Monthly Archives: October 2012

Date validation for exchanged data

A question arose in the PCEHR program: is the date 00010101 a valid date in CDA, and if not, what is the valid date range allowed?

Well, firstly, as far as the TS data type is concerned, 00010101 is a valid date – the 1/1/01, the nominal year of Jesus’ birth (only he wasn’t born that year). But just because it’s legal according to the type doesn’t mean that it makes sense – especially as the date of onset of a patient’s problem. This caused some discussion about what dates the national program should accept for clinical dates – what should be valid? When I looked around, I discovered remarkably little good information about the general subject of what dates are reasonable to accept in clinical records.

Here are some suggestions for the kinds of date ranges to test clinical programs against:

  • The oldest living Australian was born in 1901. There doesn’t appear to be any need to test clinical programs for earlier dates of birth than that (other countries go a little further back, but not far)
  • For Medicare testing, Medicare has a system limitation – it won’t create records with a date of birth more than 80 years in the past
  • Date of onset of a condition or a diagnosis could conceivably be at or even before the date of birth, but not earlier than the dates of birth of the oldest Australians – so anything back to 1901. For these dates, year and year/month precision should be acceptable to the system
  • Date of a clinical action – it’s really difficult to imagine actual clinical records (date of action, prescription, immunization) going back prior to 1980, though I know of records going back to the late 80s
  • Scheduled dates – up to 10 years in the future

There’s no real reason to accept future dates other than for scheduled dates, though now + a few seconds is always a rational thing to allow (the clocks of communicating systems are never perfectly in sync).
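To make these checks concrete, here’s a minimal sketch in Java of the kind of plausibility tests a system might apply. The bounds are hard-coded from the suggestions above purely for illustration – a real system would make them configurable:

 import java.time.LocalDate;

 public class ClinicalDateChecker {

   // illustrative bounds only, taken from the suggestions above
   static final LocalDate EARLIEST_BIRTH = LocalDate.of(1901, 1, 1);
   static final LocalDate EARLIEST_ACTION = LocalDate.of(1980, 1, 1);

   static boolean plausibleBirthDate(LocalDate d) {
     return !d.isBefore(EARLIEST_BIRTH) && !d.isAfter(LocalDate.now());
   }

   static boolean plausibleActionDate(LocalDate d) {
     // allow a small window into the future for clock differences
     return !d.isBefore(EARLIEST_ACTION) && !d.isAfter(LocalDate.now().plusDays(1));
   }

   static boolean plausibleScheduledDate(LocalDate d) {
     // scheduled dates: up to 10 years in the future
     return !d.isBefore(LocalDate.now()) && !d.isAfter(LocalDate.now().plusYears(10));
   }
 }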

These seem reasonable limits – anything outside them is quite likely to be a clerical error (I’ve seen appointments for 3013, a date of birth of 23-Apr-1048, etc.). But should a system reject dates like this? In most cases, a human inspecting the date will be able to infer the likely clerical error if it’s bad enough to fall outside these reasonable ranges (they won’t pick up subtle errors, but neither will these rules). On the other hand, a computer processing these rules can only know that the dates are wrong – it can make no inferences. So I’m not sure when systems should reject dates that fall outside the reasonable date ranges I put above.

btw, I’m not sure what the pcEHR is going to do, and couldn’t provide formal advice here anyway. Comments are welcome, particularly from clinical application test folks (thanks to one who contributed the information about medicare age restrictions, btw).

The pulse of the HL7 profession

David More drew my attention to “New poll takes pulse of HL7 profession”, and I couldn’t resist making some comments on it.

Most HL7 professionals have experience, but generally have limited tenure at their specific employers.

yep, that’s my experience. And very often this is a reflection of basic economics: organisational requirements for integration come and go, and the value of people who can fulfill these requirements varies accordingly. But people tend to respond badly to holding a fixed job with a variable income – hence the usual resolution to this is a contract/consultant model, or some other variation of a mobile workforce. And this is why many HL7 insiders are contractors or consultants like myself.

Also, they said:

About one-third of all organizations polled are currently lacking a staff retention strategy

yep, this is certainly true. Considering my comments above, it’s a hard policy to have. But it’s also crucial, and my number one recommendation to vendors in particular: know your retention policy for integrators, because it will be tested.

The next one made me laugh:

Interface engines show room for improvement. Nearly half of all respondents – including 49.5 percent of CIO/CTOs, 46.6 percent of IT managers and 42.7 percent of HL7 professionals – report that while they’re using their interface engine for what it was initially intended, they know it has more capabilities they are not using

So, because an interface engine has capabilities people aren’t using, it needs “improvement”? That’s an, umm, surprising conclusion. Or maybe it’s just that what people need varies? And that the problems are hard, so the solutions are hard?

Finally, security:

While seen as important, security is not listed as a top organizational priority. Just two percent of survey respondents said information security was one of their top priorities. But when asked how information security affects their top priorities, 89 percent of CIOs and 90 percent of IT managers said security is either integral to at least one of their organization’s top priorities, or it is their top priority.

That’s primarily because integration is sufficiently difficult by itself without introducing security to the mix. In practice, you secure the comms, and then treat the exchange as trusted. This simple policy mostly delivers the goods, and means that security need not be a particular concern for your HL7 integration experts (not compared to all the other things they need to be concerned about).


FHIR ballot issue: Identifying an extension

This issue is a follow on to the previous post about representing identifiers in FHIR.

A FHIR extension is represented this way:

 <extension xmlns="http://hl7.org/fhir">
  <code><!-- 1..1 id Code that identifies the meaning of the extension --></code>
  <profile><!-- 1..1 uri Profile that defines the extension --></profile>
  <!-- content of the extension -->
 </extension>

So an extension is identified using a system:id pair, as discussed in the previous post, only called “profile” and “code” in this case. However this is a technical identifier, and the identification of the extension should simply be a url, like this:

 <extension xmlns="http://hl7.org/fhir">
  <definition><!-- 1..1 uri Where to find the definition of the extension --></definition>
  <!-- content of the extension -->
 </extension>

Where the definition would typically follow the pattern [url of profile]#[Profile.extensionDefn.code], like this:

 http://hl7.org/fhir/registry/profiles/@iso-21090#updateMode

if it references a formal extension definition (as it should). (btw, the registry doesn’t exist at that url, nor have we named the ISO 21090 profile or its contents)
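If a receiving system still wants the two-part form, it can split the definition URI at the fragment marker – a minimal sketch in Java, assuming the [url of profile]#[code] pattern above:

 public class ExtensionDefinitions {

   // split a definition URI of the form [url of profile]#[code]
   // into its two parts (the pattern is assumed from the text above)
   public static String[] split(String definition) {
     int i = definition.lastIndexOf('#');
     if (i < 0)
       throw new IllegalArgumentException("no fragment in " + definition);
     return new String[] { definition.substring(0, i), definition.substring(i + 1) };
   }
 }

For instance, splitting the example above would yield the profile URL and the code “updateMode”.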

Discussion

This is a FHIR ballot issue. Comments on this blog post are discouraged – unless to point out outright errors in this analysis. Discussion will be taken up on the FHIR email list, with substantial contributions added to this wiki page. A doodle poll will be held for the final vote – this will be announced on the FHIR email list.

FHIR Ballot Issue: Representation of Identifiers

Theory

Most identifiers have two logical parts: the part that differs for each thing being identified, and the constant part that identifies the identifier itself.

The second part, the part that identifies the identifier itself, is often taken as read, particularly in the old style work practices where each institution is an island to itself. Here, an institution assigns an MRN (medical record number) to a patient, everyone exchanges the MRN, and everyone just knows, by context, that it’s the local institution’s MRN.

This pattern breaks down as soon as more than one institution starts exchanging information. The normal initial response is simply to add a new field for each institution, so they all know each other’s identifiers, but it rapidly becomes clear that you can’t go on like this, adding a new field for each participating institution.

The next step is to convert to a list of identifiers, where each has a name that identifies the local identifier. Using local names (“Acme Hospital”) won’t scale either, so the thing to do is to assign a namespace to the identifier: a formal naming system that identifies the identifier. To do this, you can either assign an opaque namespace (an OID or a UUID) and keep a central registry of the namespaces and their usage, or assign a self-identifying namespace, which in practice means a URI.

Aside: in v3, we decided that self-identifying namespaces wouldn’t work, and that formal registration would be required to make this reliable. But in practice, the curation hasn’t been funded, and isn’t working for the purpose that was intended. URIs will work better than OIDs/UUIDs.

But once you start thinking in terms of URIs, why not simply make the identifier explicitly a URI, and keep the whole identifier always as a URL? In fact, that’s explicitly the way that the W3C is going, and the semantic web, and it’s certainly solid and scalable. (well, at least, as solid as your choice of specific URI is).

Exchange

So, how to exchange identifiers? I know of 3 options for this:

  1. Exchange local identifiers without any namespace. This is by far the most efficient in a local institution context, and still how we do things in v2, and still the majority practice, but it’s starting to become a legacy way to think.
  2. Exchange identifiers in two parts: namespace + identifier. This works really well where the context is in transition: legacy work practices in a wider context
  3. Exchange identifiers as single global identifiers

In v2, HL7 allows #1; #2 and #3 are hard: they require local agreement, which is kind of odd (you need local agreement to use identifiers that fall outside your local agreements…)
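For example, in a v2 PID-3 field, the local identifier travels with (at most) a locally-agreed assigning authority – a hypothetical fragment, with made-up values:

 PID|1||123456^^^ACME^MR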

In v3, we required #2 or #3, though we only allowed #3 with OIDs and UUIDs, not URIs. #1 is not possible, which is an issue.

In FHIR, so far, we have a mix: some things – primarily technical identifiers – are specified as URIs. They can either be absolute or relative URLs (cases #3 and #1 respectively), and the absolute URLs can be urn:uuid: or urn:oid:. In other contexts – identifiers which are not part of the implementation framework, but external identifiers – we have used the Identifier type, which has two parts: system and id. System is the namespace for the identifier – the system under which it was published – and id is the identifier itself. The system is a uri, which can be an OID or a UUID, for alignment with v3, or it can be anything else.

This type handles cases #1 (just an id) (a change agreed but not yet published in the FHIR site) and #2 (system and id). It also handles #3 by setting system to “urn:ietf:rfc:3986”, which identifies the id as a full uri.
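For illustration, the three cases might look like this on the wire – the element names follow the draft Identifier type described here, and the system and id values are made up:

 <!-- case #1: just a local id -->
 <identifier>
  <id>MR-000345</id>
 </identifier>

 <!-- case #2: namespace + id -->
 <identifier>
  <system>http://acme.org/mrn</system>
  <id>MR-000345</id>
 </identifier>

 <!-- case #3: the id is itself a full uri -->
 <identifier>
  <system>urn:ietf:rfc:3986</system>
  <id>http://acme.org/mrn/MR-000345</id>
 </identifier>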

The same pattern is used in the Coding Type, where the code is identified by a code and a system. However in the Coding type, the system is not just the namespace, it’s the logical definition of the terminology/classification/codes/enumeration, so you can’t use “urn:ietf:rfc:3986” here.

Open Issue

One regular comment, which primarily comes from W3C/semantic web kind of folks, is: why bother with the double form (system:id)? Why not simply use a URI, and allow either absolute or relative URLs? (And this comment has been made specifically as part of the FHIR ballot.)

If you did this, you would handle #2 by defining a URI form for the concept so that it’s a single identifier; systems that want/need to do case #2 can then simply extract the local identifier out of the URL, following the rules for that URL.

This has an advantage of making things simpler for case #3 (no longer need the “urn:ietf:rfc:3986”) and being consistent with the W3C / web approach.

However I see several issues with this approach:

  1. Variability within URIs

It’s common practice to conflate “identification” and “access” in URIs. Indeed, this is a primary advantage of them, but it’s also a problem. A typical example is Twitter, where I can be identified by (for instance):

  • http://twitter.com/GrahameGrieve
  • https://twitter.com/GrahameGrieve
  • https://twitter.com/#!/GrahameGrieve

All different URLs, but the same concept. If there’s a way to know which of these is the formally correct one that should be used to identify me, I didn’t find it in 5 minutes of googling, though my Twitter preferences page indicates that https://twitter.com/GrahameGrieve is the preferred id.

But this does rather complicate matters for the URI approach.

  2. Behavior is specific to a URL

It’s not as simple as just pulling the terminal portion of the URL off. Consider the forthcoming specification from IHTSDO for identifying a concept by a URI. In this, concepts are identified by the general URI:

 http://snomed.info/id/{sctid}

However there’s also a form

 http://snomed.info/id/{sctid}/{aspect}

which also identifies the concept, but further identifies how it’s used. Still, it’s the same concept. There’s also this form:

 http://snomed.info/sct/{sctid}/version/{timestamp}

I’m not saying that these forms shouldn’t be defined, or should be defined differently. They are each defined for a purpose (though I do think that if you’re going to use http: in a URI, you’d better arrange for the URL to mean something). What this does show, however, is that processing the URI form is URI specific.
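To make that concrete, here’s a minimal sketch in Java of extracting the concept id from the forms above – the point being that this logic only works for snomed.info, and every other URI scheme needs its own:

 public class SnomedUris {

   // extract the concept id from the snomed.info URI forms quoted above;
   // the parsing logic is specific to this URI scheme and no other
   public static String sctid(String uri) {
     if (uri.startsWith("http://snomed.info/id/")) {
       String tail = uri.substring("http://snomed.info/id/".length());
       int slash = tail.indexOf('/');
       return slash < 0 ? tail : tail.substring(0, slash); // drop any {aspect}
     }
     if (uri.startsWith("http://snomed.info/sct/")) {
       String tail = uri.substring("http://snomed.info/sct/".length());
       int v = tail.indexOf("/version/");
       return v < 0 ? tail : tail.substring(0, v); // drop any version part
     }
     throw new IllegalArgumentException("not a snomed.info concept URI: " + uri);
   }
 }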

  3. There’s no general solution

If we took away the split system:id approach currently supported in Coding, and insisted on a full URL, we’d need to get a URL scheme defined for everything…. It’s just not feasible. It’s taken as much time as I can afford simply to get agreement on the use of http://loinc.org and http://unitsofmeasure.org as the URIs that identify LOINC and UCUM codes respectively.

Aside: the initial versions of FHIR defined http://hl7.org/fhir/sid/loinc and http://hl7.org/fhir/sid/ucum respectively for these. I have received some strong comments that we shouldn’t use these, because we don’t own the loinc and ucum concepts. Well, of course we don’t own them. But nor do we own loinc.org and unitsofmeasure.org, so we can’t simply assign these as the correct URIs (and create an expectation that they’ll resolve to an actual reference in a browser) – it has to be by negotiation. We still have some things using http://hl7.org/fhir/sid/… But this is absolutely not an assertion that we own them, only an assertion that this is how we identify what they are.

Discussion

This is a FHIR ballot issue. Comments on this blog post are discouraged – unless to point out outright errors of fact in this analysis. Discussion will be taken up on the FHIR email list. A doodle poll will be held for the final vote – this will be announced on the FHIR email list.


Question: How are alerts exchanged in v2?

Question:

How are “High Risk Alerts” communicated between HIS and PACS? If there is a ‘High Risk Alert’ created for a patient (e.g. pacemaker), in which HL7 message should it be communicated to PACS?

Answer:

There’s nothing explicitly defined for this kind of alert in HL7 v2, so far as I can see. The IAM segment is for “Patient adverse reaction information” (defined in v2.4), and could definitely carry related information around alerts, but not that kind of alert.

If it was up to me, and given the context of HIS –> PACS communication, which pretty much implies ADT, I’d use the OBX segment (Observation/Result) that appears in most ADT messages. I think that’s within its intended purpose, though information about the purpose of the OBX segment in the specification itself is pretty thin.
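For example, the alert could be carried as an OBX in an ADT update – a hypothetical sketch only, where the message type is illustrative, and the codes and the “L” (local) code system are made up:

 MSH|^~\&|HIS|HOSP|PACS|RAD|20121015103000||ADT^A08|00001234|P|2.4
 EVN|A08|20121015103000
 PID|1||123456^^^HOSP^MR||Citizen^John
 OBX|1|CE|ALERT^High Risk Alert^L||PACEMAKER^Patient has a pacemaker^L||||||F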


GPL v3 and Java programs

The GPL v3 includes this definition:

A “covered work” means either the unmodified Program or a work based on the Program.

and states that covered works are also covered under the GPL v3 (which is what makes it a viral license). But the big question is: what is a work that is based on the Program? The only clarification that I could find with regard to this is from the FAQ:

If the program uses fork and exec to invoke plug-ins, then the plug-ins are separate programs, so the license for the main program makes no requirements for them. If the program dynamically links plug-ins, and they make function calls to each other and share data structures, we believe they form a single program.

“Fork and exec” is a unix term, and dynamic linking is not a natural java concept. So how does that apply to a java program? I couldn’t find any good information about this (lots of opinion, but nothing reliable), so I sought clarification from the FSF. Here, reproduced in the spirit of free speech, is their response, from Joshua Gay, the licensing & compliance manager at the FSF:

The latest version of Oracle’s Java ProcessBuilder class provides the functionality for you to make *simple* fork and exec function calls that can be run on most operating systems and that are equivalent to those you would expect to find on any UNIX-like operating systems. If your program uses only the *simple* fork and exec functionality provided by the ProcessBuilder class to invoke and communicate with a GPL covered work, then your program and the GPL covered program can most likely be considered separate programs; therefore, the GPL would make no requirements on your program.

Note the emphasis on *simple*. The ProcessBuilder class allows the calling program to pass variables to the invoked program. You could use this functionality to “share data structures”, though presumably passing a file name reference wouldn’t qualify as “sharing data structures” between programs.
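For concreteness, here’s what the *simple* fork and exec pattern looks like in Java – a sketch only, where the jar name and arguments are illustrative, and only a file name crosses the process boundary:

 import java.io.File;
 import java.io.IOException;

 public class PlantUmlRunner {

   // invoke a GPL-covered tool as a separate process: simple fork and exec,
   // with no shared data structures - just a file name passed as an argument
   public static void render(File source) throws IOException, InterruptedException {
     ProcessBuilder pb = new ProcessBuilder(
         "java", "-jar", "plantuml.jar", source.getAbsolutePath());
     pb.inheritIO(); // let the child process write to our console
     int exit = pb.start().waitFor();
     if (exit != 0)
       throw new IOException("plantuml failed with exit code " + exit);
   }
 }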

Context

I was interested in using some code covered by the GPL v3 in the FHIR build process (PlantUML, to automatically draw UML diagrams). But we could not take the risk that using GPL licensed code would create GPL based obligations for the build program and its source, because that includes the java reference implementation, and I didn’t want that to be covered under the GPL (it’s covered under a much more permissive licence that creates a different sort of freedom). However the question ended up being moot, because the provider of PlantUML released a modified version that is licensed under the LGPL, for which I am deeply appreciative.

Qualifiers

I am not a lawyer, and advice I got from a lawyer isn’t legal advice to you either.

In a legal sense, the GPL is probably to be understood as a contract between the user of some software and the provider of the software. There is very little case law around the world with regard to how this kind of contract would be understood by the courts. Note that the definition of “covered work” as quoted at the top would be interpreted by the court, and it would be at the discretion of the court whether to give weight to any advice such as that provided above. Add to this the breadth of variation in understanding of contract and IP law around the world, and the situation is very unclear to me. (I recommend Heather Meeker’s “The open source alternative: Understanding risks and leveraging opportunities” for interested parties.)

Still, the opinion here has value for me, because it’s not so much the actual legal position that matters as people’s opinions (perception is reality…). And here, the FSF’s opinion has real weight.


Australian IHE Connectathon

I have been at the IHE Australian connectathon all week. I came to test the HL7Connect XDR implementation – send and receive CDA documents/packages as part of an XDS infrastructure.

There’s been a little bit of confusion about XDR – quite what it is. XDR is a document push using the same interface as XDS, but without the XDS roles and so forth to back it up. So it’s just a way to transfer content. There’s no expectation about the particular content, nor about what happens after it’s transferred. From my point of view, this is interesting because it establishes the capability for placing an interface engine between the document source and the document repository, in order to do the kinds of things that interface engines do – patch the data to deal with differences between the source and destination context.

Of course, people are working hard to ensure that this isn’t actually needed, but for now I’m confident that these kinds of requirements aren’t going away anytime soon. Of course, digital signature requirements stand in the way of this, and it’s going to be interesting as the goal for integrity and assurance in the documents runs into the very real world obstacles that stand in the way of integrity and assurance (and these aren’t the technical ones that the people who push the security line are thinking of).

Anyhow, the connectathon has been an interesting process, and I’ve met a bunch of new and interesting people, which has been great.

Unfortunately, I haven’t passed the tests, and haven’t gained certification. It’s not that my ATNA/XDR implementations aren’t up to scratch – they are, and we’ve been passing content around insecurely just fine. The problem is that you have to do this securely, and my SSL/certificate hacking skills haven’t been up to the task of making the IHE test certificates work. It’s not at all helped by the obscurity and arcanery of the toolkits I have available – one won’t load the private keys (error: “wrong format or password”), and the other reports “-1” whenever anything goes wrong. Even when I can step through them, I don’t learn anything.

If I’m going to pass a connectathon, my knowledge and tools need to improve by an order of magnitude. It troubles me somewhat that the skills needed to pass the connectathon testing are greater than I’d need in the real world (get a proper cert from a proper CA, and use it); the connectathon requires multiple certificates from multiple custom CAs… it’s been too much for me. Of course, the problems I have might be in my SSL/TLS tooling (particularly around the network binding) – I don’t know enough to get CA verification working with it, and I don’t know why it’s not working.

I don’t know whether this IHE checking is good or not – evidently some secure communications testing is needed, but spending half the connectathon fighting with the libraries to get them to use the IHE certificates (and I’m not the only one)… I don’t think this is a good use of connectathon time. But I don’t know what else IHE can do.

Anyway, my XDR and ATNA works, both ends. I’ll retreat and lick my wounds and work on my security skills. And for those people who think we should add security requirements for FHIR Connectathons… I don’t think that’s a good idea.

HL7 Email List setup

I have a gmail account (grahameg), and the Health Intersections email is also hosted on GMail. In order to keep all my mail in one place, I forward everything from healthintersections.com.au to my main gmail account. But I use @healthintersections.com.au as my main presence, so I set up GMail to use this as the reply-to address. But on my iPad, the standard Mail program doesn’t let me set the reply-to address. So if I answer an email from an HL7 list from my iPad, the response comes with a reply-to of my GMail address.

Now on the HL7 List side, it only allows submissions from an email address that is subscribed to the list, and it bases this on the reply-to address (and worse, it doesn’t tell you that the mail wasn’t acceptable, and GMail doesn’t always show you the echo of your own mails, so you don’t know).

So I have a choice: I can either

  • only reply to emails from gmail
  • only reply to emails from my iPad
  • or subscribe both addresses to the email list

None of these options is really palatable. Especially because the sign up page for email lists only allows me to sign up to a list once (though it knows that I have multiple email addresses, since I registered them with it).

There are two solutions to the underlying problem. One is documented here – you can set up the iPad to use a different reply-to: address if you don’t set the account up as a GMail account. That’s the method I followed, though what with google’s two factor authentication (you do use an email client with two factor authentication, don’t you?), this was a bit of mucking around – I still needed a GMail account for calendars after all.

But if that doesn’t work out for you, and you still have to sign up multiple times, here’s what the HL7 webmaster advises:

The listserv account structure is based entirely on the email address and there is no way to relate one email address to another email address. The HL7 site does expand slightly on this setup in that you can have multiple email addresses under your account but you can only subscribe one email address to a listserv using the HL7.org interfaces.

Should you wish to subscribe two emails to the same listserv there is a way but it is a bit more complex to set up and manage, and isn’t something that we actively support. Using these methods you can have multiple email addresses which can either Send and Receive or just Send. I’ve detailed out the steps below:

  1. Subscribe one email address to the listserv using the interfaces on the HL7.org website (My HL7 -> My Listservs). This email address will be able to both send and receive emails through the listserv.
  2. Now add your second email address directly through the listserv. This email address will be able to send email through the listserv but will not receive.
    1. Log out of the HL7.org website
    2. From the My HL7 -> My Listservs page click the “Subscribe to list services” link on the right
    3. Enter the information for your secondary email address
    4. Confirm your subscription via the email link which will be sent to your secondary email address
  3. Update the subscriptions for your secondary email address to not send you email (You can still send to the list, this will just prevent it from sending you duplicates)
    1. Return to the My HL7 -> My Listservs page and now click the “Lyris list manager” or “Lyris listserv interface” link on the left (the label may change depending on whether you are logged in or not)
    2. Log into the Lyris List Manager using your secondary email address and the password you set up in step 2.b.
    3. Click a forum that you would set to not send you email
    4. Click the My Account button on the left
    5. Change the Membership type to “No email”
    6. Click the Save Changes button
    7. Repeat 3.c. – 3.f. for each forum you’d like not to receive email for

Note that you can switch which email addresses receive email and which don’t. The HL7.org website does allow you to edit the subscription method for subscriptions which were made there, to set them to No Mail as well. To do so, from the My HL7 -> My Listservs page, click the Edit link next to your subscription and then change the subscription format to No Mail.

There’s a wider issue with HL7 email lists – the reply-to of the list is set to the sender, not the list. People (including me) habitually reply to all, which means that

  • anyone involved in the thread starts receiving multiple copies of the thread exchanges
  • if the lists are running slow – which they usually are – then the insiders on the thread can power through a discussion before anyone else can even join in

Supposedly the lists are set up this way to stop bounces going to the list. But I’m on many other lists that aren’t set up this way, and bounces usually aren’t a problem (I see a bounce avalanche – two different subscribers bouncing each other’s bounces – about once a year).

I’d really like to see this changed – HL7 has enough problems with insider-ness without adding to it in the key engagement space.

NEHTA Clinical Documents: Understanding the Rendering Specification

The CDA Rendering Specification is a highly technical specification that describes the basic rules for how CDA documents must be presented. The document itself doesn’t provide its rationale and scope – what it does and doesn’t do (and why) – in language that is comprehensible to a non-technical user. This blog post is an attempt to fill that void.

The scope of the Rendering Specification

If clinicians are going to exchange documents with each other, then all the participants – the authors, the readers, the system vendors, and the system observers (i.e. government, clinical safety reviewers, etc.) – need to be confident that everyone will see the documents correctly: that the recipient will see what the author intended them to see.

The CDA Rendering Specification exists to solve this problem – to make “rendering” the document predictable, so that everyone sees it the same way.

Background

A CDA document contains two parts:

  • The header, which is a set of data fields with contents
  • The body, which is a logical description of the document content and how it should be presented to the user, with paragraphs, tables, lists, styles etc.

When a CDA document is “rendered”, both the header and body are subject to some transform that presents this to a human on the screen following the instructions in the document. Roughly, the process can be thought of like this:

The two parts of the CDA document – the header and the body – are transformed to some common intermediate format that controls the rendering, and this is fed into some engine that presents this to a user on media (i.e. screen, or sometimes paper).

The portion between the CDA document and the media is called the “Rendering System”. Usually, the rendering system is based on web browser technology: the first step is called a “transform”, the render control data is html, and the presentation is a browser like Internet Explorer. However it is not necessary to use a browser based rendering system: all that is required is that however the document is presented, the user sees what the author intended.
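In a browser based rendering system, the transform step can be as simple as applying an XSLT stylesheet to produce HTML – a minimal sketch in Java, where the file names are illustrative only, not the actual NEHTA artefacts:

 import java.io.File;
 import javax.xml.transform.Transformer;
 import javax.xml.transform.TransformerFactory;
 import javax.xml.transform.stream.StreamResult;
 import javax.xml.transform.stream.StreamSource;

 public class CdaRenderer {

   // transform a CDA document to HTML using a rendering stylesheet;
   // the HTML is then presented to the user in a browser control
   public static void main(String[] args) throws Exception {
     Transformer t = TransformerFactory.newInstance()
         .newTransformer(new StreamSource(new File("cda-render.xsl")));
     t.transform(new StreamSource(new File("document.xml")),
         new StreamResult(new File("document.html")));
   }
 }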

The rendering specification applies to the process from CDA to media – it says what the process must achieve, or how the rendering system must behave. This way, users can be confident that the desired outcome (predictable rendering) is achieved, but NEHTA does not impose any particular tooling/architecture on the various systems that present CDA documents as part of the PCEHR and wider eco-system.

As part of this, the rendering specification describes what parts of the header data have to be shown, and lays out some general ground-rules for how these are laid out. These rules are intended to create confidence and clinical safety, while offering reasonable flexibility to meet existing application approaches and guidelines.

In addition, the rendering specification describes the styles and formatting features that authors are allowed to use, and that rendering systems must display correctly – font colour, size, styles, table layout control, and most of all, the use of fixed width fonts to support ASCII tables, which are used ubiquitously throughout healthcare. Note that the rendering specification therefore leaves the author in control of the layout and styling of the document.

Finally, the rendering specification also lays down the basic infrastructure and associated rules to manage versioning of the rendering system, so that additional styling features and presentation rules can be safely introduced in the future.

Together these features deliver predictability and therefore clinical safety to the CDA document eco-system.

Limitations of the Rendering Specification

The Rendering specification does not concern itself with how the content of particular CDA documents should be presented – what layouts and styles etc should or shouldn’t be used for a referral, etc. It is solely concerned with creating predictability across the eco-system. A forthcoming document entitled “Best Practices for Presenting Clinical Documents” will address these issues by making recommendations for exactly how the contents of a document should be presented to the user. This document doesn’t disagree with the Rendering Specification – instead, it builds on it by saying which styles and layout features should be used in which circumstances.

Along with the Rendering Specification, NEHTA publishes a stylesheet (XSLT transform). The stylesheet is a partial but conformant implementation of the rules in the rendering specification for a browser based rendering system. The implementation is only partial because a few features must be implemented in the host application that uses the stylesheet (mainly around version checking).

Most implementations in the pcEHR eco-system use this stylesheet, though it is not required. Note that implementations must adapt the stylesheet to work within their own architecture, allowing for the security and access restrictions within their application. As part of the release, a given version of the stylesheet identifies which version of the rendering specification it conforms to. Applications use this information to meet the rendering specification rules around versioning.


Question: ISO 21090 namespace

Question:

I had a first look at the XML Schema and Schematron for the ISO 21090 datatypes. I did not find any namespace information. Which namespace do these elements “live” in? Or are they in the “nonamespace”? If so, is there a specific reason for this?

Answer:

This is called “chameleon namespacing”: the ISO datatypes are not in any particular namespace, but take on whatever namespace is assigned to them where they are used. Quoting from the standard:

All elements shall be in some namespace, and the namespace shall be defined in the conformance statements of information processing entities that claim conformance with this International Standard. This International Standard reserves the namespace “uri:iso.org:21090” for direct applications of these datatypes such as testing environments

HL7 uses them in the urn:hl7-org:v3 namespace. Other organisations can use them in whatever namespace they want.
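In XML Schema terms, this works because the datatypes schema declares no targetNamespace, and xs:include adopts the namespace of the including schema – a minimal sketch, with illustrative file names and type content:

 <!-- datatypes.xsd: no targetNamespace, so the definitions are "chameleon" -->
 <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
   <xs:complexType name="II"><!-- ... --></xs:complexType>
 </xs:schema>

 <!-- consumer.xsd: xs:include pulls the datatypes into urn:hl7-org:v3 -->
 <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
            targetNamespace="urn:hl7-org:v3" xmlns="urn:hl7-org:v3">
   <xs:include schemaLocation="datatypes.xsd"/>
 </xs:schema>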