Archetype Re-use

Last Thursday & Friday @hughleslie and I presented a two day training course on openEHR clinical modelling. Introductory training typically starts with a day to provide an overview – the "what, why, how" about openEHR, a demo of the clinical modelling methodology and tooling, followed by setting the context about where and how it is being used around the world. Day Two is usually aimed at putting away the theoretical powerpoints and getting everyone involved - hands on modelling. At the end of Day One I asked the trainees to select something they will need to model in coming months and set it as our challenge for the next day. We talked about the possibility health or discharge summaries – that's pretty easy as we largely have the quite mature content for these and other continuity of care documents. What they actually sent through was an Antineoplastic Drug Administration Checklist, a Chemotherapy Ambulatory Care Nursing Intervention and Antineoplastic Drug Patient Assessment Tool! Sounded rather daunting! Although all very relevant to this group and the content they will have to create for the new oncology EHR they are building.

Perusing the Drug Checklist first - it was readily apparent it was going to need a template comprising mostly ACTION archetypes, but that meant starting with some fairly advanced modelling, which wasn't the intent as an initial learning exercise. The Patient Assessment Tool, primarily a checklist, had its own tricky issues about what to persist sensibly in an EHR. So we decided to leave these two for Day Three or Four or...!

So our practical modelling task was to represent the Chemotherapy Ambulatory Care Nursing Intervention form. The form had been sourced from another hospital as an example of existing practice, and the initial part of the analysis involved working out the intent of the form.

What I've found over the years is that we as human beings are very forgiving when it comes to filling out forms – no matter how bad they are, clinical staff still endeavour to fill them out as best they can, and usually do a pretty amazing job. How successful this is from a data point of view is a matter for further debate and investigation, I guess. There is no doubt we have to do a better job when we try to represent these forms in electronic format.

We also discussed that this modelling and design process was an opportunity to streamline and refine workflow and records rather than perpetuating outmoded or inappropriate or plain wrong ways of doing things.

So, an outline of the openEHR modelling methodology as we used it:

  1. Mind map the domain – identify the scope and level of detail required for modelling (in this case, representing just one paper form)
  2. Identify:
    1. existing archetypes ready for re-use;
    2. existing archetypes requiring modification or specialisation; and
    3. new archetypes needing development
  3. Specialise existing archetypes – in this case COMPOSITION.encounter to COMPOSITION.encounter-chemo with the addition of the Protocol/Cycle/Day of Cycle to the context (see the sketch following this list)
  4. Modify existing archetypes – in this case we identified a new requirement for a SLOT to contain CTCAE archetypes (identical to the SLOT added to the EVALUATION.problem_diagnosis archetype for the same purpose). Now in a formal operational sense, we should specialise (and thus validly extend) the archetype for our local use, and submit a request to the governing body for our additional requirements to be added to the governed archetype as a backwards compatible revision.
  5. Build new archetypes – in this case, an OBSERVATION for recording the state of the inserted intravenous access device. Don't take too much notice of the content – we didn't nail this content as correct by any means, but it was enough for use as an exercise to understand how to transfer our collective mind map thoughts directly to the Archetype Editor.
  6. Build the template.
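
To make step 3 a little more concrete, below is a minimal ADL 1.4-style sketch of the kind of specialisation involved. It is illustrative only, not the archetype we actually built on the day: the node ids (at0000.1, at0.1 etc), the data types chosen for Protocol, Cycle and Day of cycle, and the placement of these items in the encounter's other_context tree are all my assumptions for the purposes of the example.

    archetype (adl_version=1.4)
        openEHR-EHR-COMPOSITION.encounter-chemo.v1

    specialise
        openEHR-EHR-COMPOSITION.encounter.v1

    concept
        [at0000.1]    -- Chemotherapy encounter

    -- language, description and ontology sections omitted for brevity

    definition
        COMPOSITION[at0000.1] matches {    -- Chemotherapy encounter
            context matches {
                EVENT_CONTEXT matches {
                    other_context matches {
                        ITEM_TREE[at0001] matches {    -- 'Tree' node inherited from the parent archetype
                            items matches {
                                ELEMENT[at0.1] occurrences matches {0..1} matches {    -- Protocol
                                    value matches {
                                        DV_TEXT matches {*}
                                    }
                                }
                                ELEMENT[at0.2] occurrences matches {0..1} matches {    -- Cycle
                                    value matches {
                                        DV_COUNT matches {*}
                                    }
                                }
                                ELEMENT[at0.3] occurrences matches {0..1} matches {    -- Day of cycle
                                    value matches {
                                        DV_COUNT matches {*}
                                    }
                                }
                            }
                        }
                    }
                }
            }
        }

The CTCAE SLOT in step 4 would typically be expressed in a similar way, as an allow_archetype CLUSTER constraint whose include clause lists the identifiers of the permitted CTCAE cluster archetypes.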

So by the end of the second day, the trainee modellers had worked through a real-life use-case, including extended discussions about how to approach and analyse the data, and had used the clinical modelling tooling with their own hands to modify existing archetypes, and create new ones, to suit their specific clinical purpose.

What surprised me, even after all this time and experience, was that even in a relatively 'new' domain, we already had the bulk of the archetypes defined and available in the NEHTA CKM. It just underlines the fact that standardised and clinically verified core clinical content can be re-used effectively time and time again in multiple clinical contexts. The only content missing from our modelling was how to represent the nurse's assessment of the IV device before administering chemo, and that is not oncology-specific but a universal nursing activity in any specialty or domain.

So what were we able to re-use from the NEHTA CKM?

…and now that we have a use-case, we could consider requesting that the following be added from the openEHR CKM to the NEHTA instance:

And the major benefit of this methodology is that each archetype is freely available for use and re-use from a tightly governed library of models. The openEHR approach has been designed specifically to counter the traditional EHR development pattern of locked-in, proprietary vendor data. An example of this problem is well explained in a timely and recent blog - The Stockholm Syndrome and EMRs! It is well worth a read. Although not so obvious in the US, there is increasing momentum and a shift towards approaches that avoid health data lock-in and instead enable health information to be preserved, exchanged, aggregated, integrated and analysed in an open and non-proprietary format - this is liquid data; data that can flow.

The Times, They Are a-Changin’...

Channelling Bob Dylan? Not quite! But it is interesting to see some emerging HL7 and openEHR activity, at least in this little part of the world – Australia and New Zealand :) Maybe this is a model for the rest of the world - at least food for thought!

For too many long years there appears to have been a palpable barrier between the HL7 and openEHR communities. Some individuals have managed to bridge it, but there has definitely been a reluctance to engage at organisational level. It stems from before my time; I suspect vocal personalities with strong, diverging opinions were at the root. To some, it is a little like a religious argument – where "only my way is the right way"!

Be that as it may - the barrier appears to be softening, and this first became evident to me back in January last year when I attended the HL7 meeting in Sydney. A full-day openEHR workshop was presented by a diverse group of Australian companies plus NEHTA experts, with Bob Dolin in attendance, amongst others. Keith Boone tweeted his initial impression of the openEHR approach after I demonstrated our tooling, and then blogged about it. My thoughts were captured in my Adventures of a clinician in HL7 post.

Fast forward to 2012…

You may have seen some announcements from New Zealand. Firstly, the publication in April of the Health Information Exchange Architecture Building Blocks, in which the "10040.2 HIE Content Model" document – a framework for the creation of a common set of logical data definitions – specifies: "2.3.2 The data definitions of the Content Model shall be formulated as openEHR archetypes".

And secondly: HL7 New Zealand and the openEHR Foundation signed a Statement of Collaboration - also announced April 2012. Now there's a headline that might have been a surprise to many – HL7 NZ & openEHR clearly intending to work closely together!

Only last Thursday Hugh Leslie & I participated in a seminar, "Bringing the Electronic Health Record to Life," organised by HL7 NZ, Health Informatics New Zealand (HINZ) and the University of Auckland. Prof Ed Hammond, 'the father of HL7', keynoted the meeting: "EHR - The Killer App". In the afternoon mini-tutorials, David Hay presented on FHIR, and Hugh, Koray Atalag and I presented a little about our openEHR work, including clinical knowledge governance and clinician engagement. Koray (an HL7 NZ member and openEHR localisation program coordinator) announced within the meeting that HL7 NZ is the likely organisation to auspice a NZ chapter of openEHR. Now that definitely has to start to change the openEHR/HL7 dynamic somewhat, even if HL7 NZ is a relatively small international affiliate :). The HL7 NZ leadership, to their absolute credit, are certainly not being constrained by any traditional 'turf wars'.

The following day, last Friday, Hugh and I presented a full-day workshop on openEHR, again sponsored by HL7 NZ, HINZ and the University of Auckland. As I understand it, this was the first opportunity for the openEHR approach to be socialised with the broader health IT community in NZ; about 25 were in attendance, including members of the HL7 NZ Board, vendors, and regional and Health IT Board reps. The focus was on how openEHR could support the creation of a range of technical artefacts to meet NZ's requirements for CDA messaging (and beyond), generated from a cohesive and governed pool of clinical content models.

Interestingly we had a surprise attendee for the workshop – Ed Hammond joined us for the whole day. I won't presume to guess what Ed has taken away from the day, although he did offer up a comment to the group about the value of exploring use of archetype content directly within CDA.

Post workshop one of the attendees tweeted:

"At #HINZ #openEHR talks last 2 days. openEHR is a fantastic foundation for practical action. Left knowing steps I will take. How cools that!"

And of course, there is an HL7 AU meeting in Sydney early next week entitled "FHIR?  CIMI? openEHR? What's the Future of eHealth & mHealth Standards?" The agenda:

  • Keynote: Ed Hammond (again) – "FHIR, CIMI and openEHR - What's the Future for eHealth Standards?". [It will be very interesting to hear his opinion after last week's openEHR exposure.]
  • Grahame Grieve: "FHIR – What is it? Why has it suddenly become so popular?"
  • Hugh Leslie: "Recent developments in openEHR and CDA", and
  • I'll be reporting on the CIMI project.

It would be an interesting day to be a fly on the wall! 2 HL7-ers and 2 openEHR-ers addressing an HL7 meeting - all exploring alternatives to the current approaches!

So, keep your eye on the space where HL7 intersects with openEHR – might be some interesting developments.

_______________________

Within the openEHR community, and definitely within Ocean Informatics where I work, we are certainly finding that significant interest is being generated from many sources in the process of using standardised and governed openEHR clinical content as a means to generate a range of technical artefacts, including CDA. The New Zealand national interest and activity is evident, as outlined above. And in addition:

  • In Australia, NEHTA has piloted the use of clinician-reviewed archetypes from the NEHTA Clinical Knowledge Manager as the starting point for generating a number of the PCEHR technical specifications. This work is ongoing and being extended.
  • CIMI, the initiative that grew out of HL7 but is now independent, is seeking to develop an internationally agreed approach to clinical modelling and generation of multiple technical outputs. It has already agreed to utilise openEHR ADL 1.5 as its modelling formalism and is using the openEHR Reference Model as the starting point for developing a CIMI Reference Model. We watch this progress with interest.
  • And Brazil's national program has recently reconfirmed its intention to commence using openEHR.

Whether the final solution is openEHR or CIMI or even something else, I think that the advent of standardised clinical models as the common starting point for generation of a range of technical outputs is upon us. Ignore it at your peril. And specifically, I would suggest that HL7 International should be considering very seriously how to embrace this new approach.

Sticking with the quasi-gospel theme, maybe it is now a bit more like Curtis Mayfield's "People Get Ready":

People get ready
There's a train a-coming
You don't need no baggage
Just a-get on board

Let's leave our baggage behind, get on the 'train' together to collaborate and create something that transcends any health IT domain turf war! Don't get left behind...

Warts and all: Reviewing an archetype

During my visit to HIMSS12 in February, I finally met Jerry Fahrni (@JFahrni) face to face - a pharmacist and Twitter colleague with whom I'd had 140-character conversations over some years. We'd also once talked on Skype about some of the clinical archetypes, and during our HIMSS conversation I managed to persuade him to take a look at the openEHR community's Adverse Reaction archetype and participate in the community review.

He did, and at my further request he has put ink to blog and has recorded his experience as a newbie reviewer so that others might have some sense of what completing an archetype review entails, warts and all.

Jerry's review, reproduced here:

...According to good ol’ Merriam-Webster an archetype is “the original pattern or model of which all things of the same type are representations or copies: also : a perfect example“. Simple enough, but still too vague for my brain so I went in search of a better explanation which I found at Heather’s blog – Archetypical.

According to the Archetypical site ”openEHR archetypes are computable definitions created by the clinical domain experts for each single discrete clinical concept – a maximal (rather than minimum) data-set designed for all use-cases and all stakeholders. For example, one archetype can describe all data, methods and situations required to capture a blood sugar measurement from a glucometer at home, during a clinical consultation, or when having a glucose tolerance test or challenge at the laboratory. Other archetypes enable us to record the details about a diagnosis or to order a medication. Each archetype is built to a ‘design once, re-use over and over again’ principle and, most important, the archetype outputs are structured and fully computable representations of the health information. They can be linked to clinical terminologies such as SNOMED-CT, allowing clinicians to document the health information unambiguously to support direct patient care. The maximal data-set notion underpinning archetypes ensures that data conforming to an archetype can be re-used in all related use-cases – from direct provision of clinical care through to a range of secondary uses.” That gave me a better understanding of what they were trying to do.

Anyway, when Heather asked me to review the Adverse Reaction archetype I was a little hesitant. The projects I’m asked to be involved with are typically much smaller in scale. This was something different and I felt a little intimidated. My gut reaction was to politely decline, but when someone asks you to do something face to face it makes excusing yourself for some lame reason a lot harder. So I agreed with more than a bit of trepidation.

The openEHR project utilizes a system called the Clinical Knowledge Manager (CKM). In the most basic terms, the CKM is an online content management system for all the archetypes being designed by the openEHR project, and it’s impressive. A more in depth description can be found here.

Logging into the system was simple. The email invitation I received to review the Adverse Reaction Archetype contained a link that took me to the exact location I was supposed to be. From there things got a bit more complicated. The CKM is easy enough to navigate, but the amount of information and navigational elements within the system is staggering. It took me a while to figure out exactly what I was supposed to do. Once I figured it out I was able to quickly go through the archetype, read what other comments people had made and make a couple of minor notes myself. One thing I could never completely figure out was how to save my work in the middle and continue later. Sounds simple enough, but for whatever reason it just wasn’t obvious to me. I ended up powering through my “review” in one extended session because I was afraid I’d lose my place. The archetype itself was impressive. It’s clear from the information and detail that people have spent a lot of time and effort developing the adverse reaction archetype. There’s no question that a lot of great minds had been involved in this work. The definition made sense as did the data that was being collected and presented. The archetype offered flexibility for information gathering that included the simplest form of adverse reaction to complex re-exposure and absolute contraindication notation (this is sorely missing in many systems I’ve used over my career). Overall I had little insight to offer during the review, only a couple of minor comments.

I’d say the entire process was pretty straightforward with some minor complications. Like everything else I’m sure the process would get easier over time and multiple uses.

Thanks Jerry. Your independent and honest opinion is much valued. Perhaps next time... !! (Just joking)

CIMI... one of many crossroads

Grahame Grieve posted CIMI at the Crossroads recently. I can't disagree with a lot of the content, but maybe I'm a bit more of an optimist as I draw some slightly different conclusions. Grahame is totally right about what it has achieved so far:

  • a significant membership roll that has never been achieved before
  • significant agreement on an initial approach to clinical models - a primary formalism of ADL 1.5/AOM, with a commitment to support transformation to isosemantic UML models in a spirit of inclusivity and harmonisation.

And as he points out, the notion that the modelling methodology was chosen independently of the Reference Model is somewhat disconcerting.

"...the decision to choose ADL/AOM as the methodology, while deferring the choice of reference model. While I understood the political reality of this decision, choosing an existing methodology (ADL/AOM) but not the openEHR reference model committed CIMI to building at least a new tooling chain, a new community, and possibly a new reference model.

The cost of this is high; so high that the opportunity created by the foundation of CIMI may likely founder if we see another attempt to reinvent the health IT wheel, yet again.

There are many opinions, and everyone at the CIMI table has their own bias, history and experience. Organisational and personal investment in each existing solution is high. No one wants to throw away their efforts and 'start again'; everyone wants their own work to be the one that is successful and sustained.

The CIMI community does need to make an objective decision if it is to move forward. It may not be the result that wins a popularity contest. It is very likely that some members will walk away and keep working as they always have, maybe intending to return when a more mature solution is on offer.

In his paragraph on the pros and cons of openEHR, Grahame very eloquently states:

This is the first choice: pick the least worst established clinical modelling paradigm.

:)

"Least worst" - Thanks Grahame! You could turn that around: the 'best' available so far, where there is no perfect solution!

But it's not a bad principle - to take the least worst and make it better!

The chair of the openEHR board, Sam Heard, proposed the following to the openEHR community back in October 2011:

“If the CIMI group chooses to use ADL as the formalism then the openEHR community is prepared to explore the Foundation governance arrangements with the CIMI group and align the two efforts using the structures that are mutually agreed.

Changes to ADL and the openEHR Reference Model may be part of the process to meet the collective needs, and alignment of the shared RM and a reviewed RM for ISO 13606 would also be a major goal. ADL 1.5 would be submitted to ISO as part of this alignment.”

Seems sensible to me - start with a robust candidate and modify/enhance it to meet the collective needs. The latest version of the openEHR RM is clearly one candidate. It has evolved significantly from the 2005 version which forms the basis of ISO 13606. Given that ISO 13606 (parts 1-5) is due for revision this year, perhaps we have a great opportunity for harmonisation. The openEHR community is already starting to develop a proposal for the revision, but a greater achievement would be to align all of these efforts into a new 13606/openEHR/CIMI specification.

This is a difficult problem that we are trying to solve. We know that because it has not been solved before.

This is definitely not the first crossroad that CIMI has encountered - don't underestimate the effort that has brought the group to this point - and it will definitely not be the last. What will determine success is keeping the end goal front and centre in CIMI's decision-making; cutting ruthlessly through the political and personal agendas; putting pragmatism ahead of perfection; and a willingness to compromise in order to move forward.

It may not be possible. It could be a hell of a ride. I still think it has the potential to make a hell of a difference.

The ultimate PHR?

I've been interested in the notion of a Personal Health Record for a long time. I was involved in the development of HotHealth, which launched at the end of 2000, a not-so-auspicious year given the dot com crash! By the time HotHealth was completed, all the potential competitors identified in the pre-market environmental scan were defunct. It certainly wasn't easy to get any traction for HotHealth take-up, and yet it was only recently retired. For a couple of years it was successfully used at the Royal Children's Hospital, cut down and re-branded as BetterDiabetes to support teenagers in self-managing their diabetes and communicating with their clinicians, but it wasn't sustained.

This is not an uncommon story for PHRs. It is somewhat comforting to see that the course of those such as HealthVault and Google Health has also not been smooth and fabulously successful :-)

Why is the PHR so hard?

In recent years I participated in the development of ISO Technical Report 14292:2012: Personal health records -- Definition, scope and context. In this, my major contribution seemed to be introducing the idea of a health information continuum.

However, in the past year or so, my notion of an ideal PHR has moved on a little further again. It is based on the premise of a health record platform in which standardised health information persists independently of any one software application and can be accessed by any compliant application, whether consumer- or clinician-focused. And the record of health information can be contributed to by any number of compliant systems - whether a clinical system, a PHR or a smartphone app. The focus is on the data, the health record itself; not the applications. You will have seen a number of my previous posts, including here & here!

So, in this kind of new health data utopia, imagine if all my weights were automatically uploaded to my Weight app on my smartphone wirelessly each morning. Over time I could graph this and track my BMI etc. Useful stuff, and this can be done now - but only into dead-end silos of data within a given app.

And what if a new fandangled weight management application came along that I liked better - perhaps it provided more support to help me lose weight. And I want to lose weight. So I add the new app to my smartphone and, hey presto, it can immediately access all my previous weights - all because the data structure in both apps is identical. Thus the data can be unambiguously understood and computed upon within the second app without any data manipulation. Pretty cool. No more data silos; no more data loss. Simply delete the first app from the system, and elect to keep the data within my smartphone health record.
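
To illustrate what 'identical data structure' means in practice: both weight apps would read and write data conforming to the same shared archetype for body weight. The fragment below is a deliberately simplified, hand-written ADL sketch of such a definition (the archetype id, node ids and constraints are illustrative only; the published openEHR body weight archetype in CKM is considerably richer):

    archetype (adl_version=1.4)
        openEHR-EHR-OBSERVATION.body_weight_sketch.v1

    concept
        [at0000]    -- Body weight

    -- language, description and ontology sections omitted for brevity

    definition
        OBSERVATION[at0000] matches {    -- Body weight
            data matches {
                HISTORY[at0001] matches {
                    events cardinality matches {1..*; unordered} matches {
                        EVENT[at0002] occurrences matches {0..*} matches {    -- Any event
                            data matches {
                                ITEM_TREE[at0003] matches {
                                    items matches {
                                        ELEMENT[at0004] matches {    -- Weight
                                            value matches {
                                                DV_QUANTITY matches {*}    -- a real archetype would constrain this to mass units such as kg
                                            }
                                        }
                                    }
                                }
                            }
                        }
                    }
                }
            }
        }

Because both the old and the new app capture and read their measurements against this one shared definition, the new app can compute over every historical weight without any mapping or conversion - the 'plug & play' behaviour described above.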

And as I add apps that suit my lifestyle, health needs and fitness goals etc, I'm gradually accumulating important health information that is probably not available anywhere else. And consider that only I actually know what medicines I'm taking, including over-the-counter and herbal medicines. The notion of a current medication list is really not in the remit of any clinician, but of the motivated consumer! And so if I add an app to start to manage my medications or immunisations, this data could also be used in yet another compliant chronic disease support app for my diabetes or asthma or...

I can gradually build up a record of health information that is useful to me to manage my health, and that is also potentially useful to share with my healthcare providers.

Do you see the difference to current PHR systems?

I can choose apps that are 'best of breed' and applicable to my need or interest.

I'm not locked in to any one app - a mega-app that contains stuff I don't want and will never use, with all the overheads and lack of flexibility that brings.

I can 'plug & play' apps into my health record, able to change my mind if I find features, a user interface or workflow that I like better.

And yet the data remains ready for future use and potentially for sharing with my healthcare providers, if and when I choose. How cool is that?

Keep in mind that if those data structures were the same as those used by my clinicians' systems, then there is also potential for me to receive data from my clinicians and incorporate it into my PHR; similarly, there is potential for me to send data to my clinician and give them the choice of incorporating it into their systems - maybe my blood glucose records obtained directly from my glucometer, my weight measurements, etc. Maybe, one day, even MY current medicine list!

In this proposed flexible data environment we are avoiding the 'one size fits all', behemoth approach, which doesn't seem to have worked well in many situations, whether clinical systems or personal health records. Best of all, the data is preserved in a non-proprietary, shared format - the beginnings of a universal health record or, at least, a health record platform fully supporting data exchange.

What do you think?


"We have the capability!"

We have had the technology for a purpose-built openEHR-compliant 'plug & play' platform for some time; standalone applications have been built, but just recently it appears that the practical reality of a multi-application platform is also about to happen. "We have the technology. We have the capability..." Stay tuned.

...Reminds me of my 1970s hero, Steve Austin, the Six Million Dollar Man. With apologies to Steve:

"We have the technology. We have the capability to make the world's first universal health platform. openEHR will be that platform. Better than ever before. Robust health data...application independent...semantically interoperable!"

[youtube http://www.youtube.com/watch?v=K7zNY0I5JNI&w=480&h=360]

Preserving health data integrity

How valuable do we really think health data is? How seriously do we take our responsibility to preserve the integrity of our health data? Probably not nearly as much as we should.

Consider the current situation of most clinicians or organisations when purchasing a clinical EHR system. What do they look for? Many possible answers are obvious, but there is one question that I suspect very few are asking. How many consider what data they will be able to export and convert to another format, preserving the current data integrity, at the end of the typical 5-10 year life span of the application? Am I wrong if I suggest it is not many at all?

Despite all the effort that we clinicians put into entering detailed data to create a quality health record, we don't often seem to consider the "What next?" scenario. How much, and precisely what, data will we be able to safely extract, export, transfer or convert into the next, inevitable, clinical system? Ironically, we are simultaneously well aware that clinical systems have a limited technical life span.

Any and all of the health data in a health record is an incredibly valuable asset to the holder, to the patient (if these are not the same entities) and to those downstream with whom we may share it in the future - in terms of the $$ invested; the manpower used to capture, store, classify, update and maintain it; and, not least, the future value that comes from appropriate and safe clinical decisions that rely on the integrity of existing EHR data.

Yet we don't seem to consider it much... yet. However, as more clinicians are creating increasing amounts of isolated pockets of health data, we should be thinking about it very hard.

Every time we change systems we put our health data at risk - risk of absolute data loss, and risk of possible corruption during the conversion. The integrity of health data cannot be guaranteed each time it is ported into a new system, because current methods always require some kind of intervention - mapping, transformations, tweaking, 'cleaning', etc. Small errors can creep in with each data manipulation and, over time, can compromise the safety and value of our health data. In principle we know that the data should not be manipulated, but, being limited by our traditional approach to siloed EHR applications, we have previously had little choice.

We need to change our approach and preserve the integrity of our health data at all costs. After all, that is the only reason why we record any facts or activity in an electronic health record - so we can use the data for direct patient care; share and exchange the data; aggregate and analyse the data; and use the data as the basis for clinical decision support.

We should not be focused on the application alone.

Apps will come and go, but we want our health data to persist - accurate and safe for clinical use - beyond the life span of any one clinical software application.

I've said this before, but it's worth saying many times over:

It's. all. about. the. data.

One of the key benefits of the openEHR paradigm is that the data specifications (the archetypes) are defined independently of any specific clinical system or application; are based on an open EHR architecture specification; and are publicly available in repositories such as the Clinical Knowledge Manager. It means that any data captured according to an archetype specification is directly usable by any and all archetype-compliant systems. Plus, the data is no longer hard-wired into a proprietary application, making it orders of magnitude easier to accurately share or transfer health data than it has been before.

Clinical system vendors that don't directly embrace archetype technology may still be 'archetype-aware', and can choose to use the archetype specifications as a means to understand the meaning of existing archetyped data and integrate it appropriately into their systems. Similarly, they can map from their non-openEHR systems to the archetype specifications as a standardised method for data export and exchange.

The openEHR paradigm enables potential for archetype-compliant systems to share the same archetyped data repository - along the lines of an Apple platform 'plug & play' approach, with applications being added, removed or updated to suit the needs of the end-users, while the data persists intact. No more data conversions needed.

Adapted from Martin van der Meer, 2009

Now that's good news for our health data.

CIMI progress...

Just spreading the news... The Clinical Information Modelling Initiative met again recently and the minutes are now available from the early but rapidly evolving CIMI wiki site - http://www.cimiwiki.org

Intro from the latest CIMI minutes:

CIMI held its 5th group meeting in San Antonio from January 12 – 14, 2012. Over 35 people attended in person with an additional 5 participants attending via WebEx.

At this meeting, the group:

  • Established the criteria for membership and the process for adding members to the CIMI group
  • Authorised an interim executive committee
  • Determined a tentative schedule of meetings for 2012
  • Moved forward with the definition of the modeling framework
  • Formalized two task forces to begin the modeling work so that example models can be presented at the next meeting
  • Recognized the formation of a Glossary Group (lead to be announced)
  • Agreed to plans for utilizing existing tools to rapidly develop and test a candidate reference model and to create a small group of example CIMI models that build on the reference model work

Full Minutes are here

Are we there yet?

No, but we are definitely moving in the right direction... Conversations are happening that were uncommon generally, and downright rare in the US, only 18 months ago. I've been rabbiting on for some time about the need for a 'universal health record' - an application-independent core of shared and standardised health information into which a variety of 'enlightened' applications can 'plug & play', thus breaking down the hold of the 'not invented here' approach of proprietary clinical applications with which we battle almost everywhere today.

So it was pleasing to see Margalit Gur-Arie's recent blog post on Arguments for a Universal Health Record. While I'm not convinced about the reality of a single database (see my comments at the end of Margalit's post), I wholeheartedly endorse the principle of having a single approach to defining the data - this is a very powerful concept, and one that may well become a pivotal enabler of health IT innovation.

In addition, Kevin Coonan has started blogging in recent days - see his Summary of DCMs regarding principles of Detailed Clinical Models (aka DCMs). Now I know that Kevin's vision for an implementable HL7 DCM is totally different to the openEHR DCMs (=archetypes) that I work with. But we do agree on the basic principles and attributes of these models that he has outlined in his blog post - it is quite a good summary; please read it.

Now these two bloggers are US-based - and this is significant because, until recently, the US emphasis has been hugely on connecting systems and exchanging document-based health information. I view their postings as indicative of a growing trend toward the realisation that standardisation of clinical content is a necessary component of a successful health IT ecosystem in the (medium- to long-term, sooner the better) future.

Note that "Detailed Clinical Models" is the current buzz phrase for any kind of model that might be standardised and shared, but it is also used very specifically for the HL7 DCMs currently in the midst of an interminable ballot process, and for the Australian national program's DCMs, which are actually openEHR archetypes being used as part of their initial specification process. "Detailed Clinical Models" is being used in many conversations rather blithely, with many not fully understanding the issues. On one hand it is positively raising awareness of our need to standardise content; on the other hand, it is confusing the issue, as there are so many approaches. See my previous post about DCMs - clarifying the confusion.

It is worth flagging that there has been considerable (and, I would also venture to say, rather premature) effort put in by a few to formalise principles for DCMs in the draft ISO 13972 standard (Quality Requirements and Methodology for Detailed Clinical Models), currently out for ballot. My problem with this ISO work is that the DCM environment is relatively immature - there are many possible candidates with as many different approaches. It is also important to make clear that having multiple DCMs compliant with the generic principles outlined in an ISO standard may mean that our published silos of "DCM made by formalism X" and "DCM made by formalism Y" models are of higher quality, but it definitely will not solve our interoperability issues. For that you need a common reference model underpinning the models or, alternatively, a primary reference model with known and validated transformations between clinical model formalisms.

The more recent evolution of the CIMI group is really important in this current environment. It largely shares the principles that Kevin, openEHR and ISO 13972 espouse - creation of standardised and shareable clinical content models, bound sensibly to terminology, as the basis for interoperability. These CIMI models will be computable and human readable; they will be based on a single Reference Model (yet to be finalised) and common data types (also yet to be finalised), utilising the openEHR Archetype Definition Language (ADL) 1.5 as the initial formalism. Transformations of the resulting clinical models to other formalisms will be a priority, to make sure that all systems can consume these models in the future. All will be managed in a governed repository, likely under the auspices of some kind of executive group, with expert teams providing practical oversight and management of models and model content.

Watch for news of the CIMI group. It has an influential initial core membership that embraces multiple national eHealth programs and standards bodies, plus all the key players with clinical modelling expertise - bringing all the heavy lifters in the clinical modelling environment into the same room and thrashing out a common approach to semantic interoperability. They met for three days recently, prior to the HL7 meeting in San Antonio. The intent (and challenge) is to get all of this diverse group singing from the same hymn book! I believe they are about to launch a public website to allow for transparency, which has not been easy in these earliest days. I will post it here as soon as it is available.

Maybe the planets are finally aligning...!

I have observed a significant change in the mindsets, conversations and expectations in this clinical modelling environment over the past 5 years, and especially in the past 18 months. I am encouraged.

And my final 2c worth: in my view, the CIMI experience should inform the ISO DCM draft standard, rather than progressing the draft document based on largely academic assumptions about clinician engagement, repository requirements and model governance - there is so much we still need to learn before we lock it into a standard. I fear that we have put the cart before the horse.

To HIMSS12... or bust!

This blog, and hopefully some others following, will be about my thinking and considerations as I man an exhibition booth at the huge HIMSS12 conference for the first time next month… Well, we’ve committed. We’re bringing some of the key Ocean offerings all across the ocean to HIMSS12 in Las Vegas next month. If it was just another conference, I wouldn’t be writing about it. But this is a seriously daunting prospect for me. I’ve presented papers, organised workshops, and run conference booths in many places over the years – in Sarajevo, Göteborg, Stockholm, Capetown, Singapore, London, Brisbane, Sydney, Melbourne – but this is sooooooo different!

The equivalent conference here in Australia would gather 600-800 delegates and maybe 40-50 exhibition booths. Most European conferences seem to be a similar size, admittedly probably with a more academic emphasis rather than such a strong commercial bent, which might explain some of the size difference. By comparison, last year's HIMSS conference had 31,500 attendees and over 1000 exhibition booths – no incorrect zeros here - just mega huge!

I can’t even begin to imagine how one can accommodate so many people in one location. I have never even visited HIMSS before – we are relying heavily on second hand reports. You may start to understand my ‘deer in headlights’ sensation as we plan our first approach to the US market in this way.

Ocean's profile is much higher elsewhere internationally. Our activity in the London-based openEHR Foundation and our products/consulting skills have a reasonable profile in Australia and throughout much of Europe, and awareness is growing in Brazil as the first major region in South America. In many ways the US is one of the last places for openEHR to make a significant impression – there are some pockets of understanding, but uptake has been limited because the approach is clearly orthogonal to the major commercial drivers in the US at present. However, we are observing that this is slowly changing... hence our decision to run the gauntlet!

openEHR’s key objective is the creation of a shareable, lifelong health record - the concept of an application-independent, multilingual, universal health record. The specification is founded upon the notion of a health record as a collection of actual health information, in contrast to the common idea that a health record is an application-focused EHR or EMR. In the openEHR environment the emphasis is on the capture, storage, exchange and re-use of application-independent data based on shared definitions of clinical content – the archetypes and templates, bound to terminology. In openEHR we call them archetypes; in ISO, similar constructs are referred to as DCMs; and, most recently, there are the new models proposed by the CIMI initiative. It’s still all about the data!

So, we’re planning to showcase two products that have been designed and built to contribute to an openEHR-based health record - the Clinical Knowledge Manager (CKM), as the collective resource for the standardised clinical content, and OceanEHR, which provides the technical and medico-legal foundation for any openEHR-based health record – the EHR repository, health application platform and terminology services. In addition, we’ll be demonstrating Multiprac – an infection control system that uses the openEHR models and is built upon the OceanEHR foundation. So Multiprac is one of the first of a new generation of health record applications which share common clinical content.

This will be an interesting experience, as neither is the sort of product typical attendees will probably be looking for when visiting the HIMSS exhibition. So therein lies one of our major challenges – how to get in touch with the right market segment… on a budget!

We are seeking to engage with like-minded individuals or organisations who prioritise the health data itself and, in particular, those seeking to use shared and clinically verified definitions of data as a common means to:

  • record and exchange health information;
  • simplify aggregation of data and comparative analysis; and
  • support knowledge-based activities.

These will likely be national health IT programs; jurisdictions; research institutions; secondary users of data; EHR application developers; and of course the clinicians who would like to participate in the archetype development process.

So far I have in my arsenal:

  • The usual on-site marketing approach:
    • a booth - 13342
    • company and product-related material on the HIMSS Online Buyers Guide; and
    • marketing material – we have some plans for a simple flyer, with a mildly Australian flavour;
  • Leverage our website, of course;
  • Developing a Twitter plan for @oceaninfo specifically, with activity in my @omowizard account to support it, and anticipating some support from @openEHR – this will be a new strategy for me;
  • And I’m working on development of a vaguely ‘secret weapon’ – well, hopefully my idea will add a little ‘viral’ something to the mix.

So all in all, this will definitely be a learning exercise of exponential proportions.

To those of you who have done this before, I’m very keen to receive any insight or advice at this point. What suggestions do you have to assist a small non-US based company with non-mainstream products make an impact at HIMSS?

Clinical Knowledge Repository requirements

I've been hearing quite a lot of discussion recently about Clinical Knowledge Repositories and governance. Everyone has different ideas - ranging from sharing models via a simple subversion folder through to a purpose-built application managing governance of combinations of versioned knowledge assets (information models, terminology reference sets, derived artefacts, supporting documentation etc) in various states of publication. It depends what you want to achieve, I guess. In openEHR it became clear very quickly that we need the latter in order to provide a central resource with governance of cohesive release sets of assets and packages suitable for organisations and vendors to implement.

In our experience it is relatively simple to develop a repository with asset provenance and user management. What is somewhat harder is adding in processes of collaboration and validation for these knowledge assets - this requires development of review and editorial processes and, ideally, the display of transparency and accountability on the part of those managing the knowledge artefacts.

The most difficult scenario is meeting the requirements for practical implementation, where governance of configurable groups of various assets is required. In openEHR we have identified the need for cohesive release sets of archetypes, templates and terminology reference sets. This can be very complicated when the artefacts are each in various states of publication and multiple versions are in use in 'on the ground' implementations. Add to this the need for parallel iso-semantic and/or derived models, supporting documents, and derived outputs in various stages of publication, and you can see how quickly chaos can take over.

So, what does the Clinical Knowledge Manager do?

  1. CKM is an online application based on a digital asset management system to ensure that the models are easily accessed and managed within a strong governance framework.
  2. Focus:
    1. Accessible resource - creation of a searchable library or repository of clinical knowledge assets - in practice, a ‘one stop shop’ for EHR clinical content
    2. Collaboration Portal - for community involvement, and to ensure clinical models that are ‘fit for clinical use’
    3. Maintenance and governance of all clinical knowledge and related resources
  3. Processes to ensure:
    1. Asset management
      1. uploading, display, and distribution/downloading of all assets
      2. collaborative review of primary* assets  to validate appropriateness for clinical use
        1. content
        2. translation
        3. terminology binding
      3. publication life cycle and versioning of primary assets
      4. primary asset provenance, differential and change log
      5. automatic generation of secondary**/derived assets or, alternatively, upload and versioning when auto generation is not possible
      6. upload of associated***/related assets
      7. development of versioned release sets of primary assets for distribution
      8. identify related assets
      9. quality assessment of primary assets
      10. primary asset comparison/differentials including compatibility with existing data
      11. threaded discussion forum
      12. flexible search functionality
      13. coordinate Editorial activity
      14. share notification of assets to others eg via email, twitter etc
    2. User management
    3. Technical management
    4. Reporting
      1. Assets
      2. Users
      3. Editorial activity support

In the current openEHR CKM the assets, as classified above, are:

  1. *Primary assets:
    1. Archetypes
    2. Templates
    3. Terminology Reference Set
  2. **Secondary assets:
    1. Mindmaps
    2. XML transforms
    3. plus ability to add transforms to many other formalisms, including CDA
  3. ***Associated assets:
    1. Design documents
    2. References
    3. Implementation guides
    4. Sample data
    5. Operational templates
    6. plus ability to add others as identified

While CKM is currently openEHR-focused - management of the openEHR artefacts was the original reason for its development - with some work the same repository management, collaboration/validation and governance principles and processes identified above could be applied to any knowledge asset, including all flavours of detailed clinical models and other clinical knowledge assets being developed by CIMI, HL7 etc. Yes, CKM is currently a proprietary product, but only because that was the only way to progress the work at the time - business models can always potentially be changed :)

It will be interesting to see how thinking progresses in the CIMI group, and others who are going down this path - such as the HL7 templates registry and the OHT proposed Heart project.

We can keep re-inventing the wheel and take the 'not invented here' point of view, or we can explore ways to collaborate on and enhance the work already done.