Over the years I’ve had a few ideas how we could improve the gathering of intelligence for an emergency, and how this could be linked into both situation reporting and as an action tracking tool for operations and other general management tasks.
Most of this thought has been directed at how to implement it in a Sahana product, but it could conceivably be applied to any product for emergency management, or even broader business/organisation management. Aspects of this solution already exist across both Ushahidi and Sahana – but neither yet provides a comprehensive solution.
This need was also something that I saw during my involvement in Building Safety Evaluation during the Canterbury Earthquake in 2010. It is also potentially a far more robust means of managing the collection of intelligence associated with an emergency.
I’ll talk about this using a modular approach – making the assumption that different groups of users will have different types of access, to ensure the protection of submitted information (see here for a reason why).
Capturing the Information
The first module would be the Intelligence Gathering module. This is where information from other sources is collected.
In the CrisisCommons context, this might allow an anonymous volunteer to submit cut-and-paste text from a news article on a website or from a situation report. For the purpose of this concept, I’m going to ignore the copyright issue – but do want to flag that this may be an issue with the collection of information from the media where a lot of information is copyright.
For Building Safety Evaluation, this may be the unstructured reports that we received that ‘the wall on this building looks like it is about to fall on a neighbouring building’.
So this module basically allows for the entry and recording of what is mostly unstructured information – it may be from a website, a phone call, SMS, even a scribbled piece of paper that someone passes you in the EOC.
Such a system could easily be configured to allow members of the public or crowdsourcing volunteers to enter such information without having to register or have an account – thereby keeping the barrier to collecting raw information low.
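To make the shape of this module concrete, here is a minimal sketch of what such a raw-report record might look like. This is an illustrative Python data model, not Sahana's actual schema – every field name here is an assumption.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

# Hypothetical raw intelligence record -- deliberately unstructured:
# just the content, where it came from, and when it arrived.
@dataclass
class RawReport:
    text: str                     # the unstructured content as received
    source: str                   # e.g. "phone", "sms", "website", "paper"
    received_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))
    submitter: Optional[str] = None  # optional -- anonymous submissions allowed

# Anyone, including an anonymous volunteer, can create a record;
# no account or registration is required at this stage.
report = RawReport(
    text="The wall on this building looks like it is about to "
         "fall on a neighbouring building",
    source="phone")
```

The key design point is that nothing is mandatory beyond the text itself – structure comes later, from the review team, not from the submitter.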
Adding Structure and Metadata
Having this information in digital form is just the start, however. The next step is to get a team of trusted individuals to review the submitted information and critique it for quality, actionability, and credibility. At the same time they would ideally try to add other metadata to the record.
Does it contain a freeform address? Then the reviewer would associate an address with the record, properly structuring the address information. If a geocoder is available, latitude and longitude records would also be entered.
We had some issues in Christchurch whereby some addresses that were reported are not official addresses recognised by the council property system – this usually happens with ‘vanity’ addresses. Our Kestrel office has a common/vanity address of 35 Riccarton Road, but for council and utility purposes, our building is actually 39 Riccarton Road. I spent a bit of time in September, and again following the Boxing Day aftershock, checking some of the incoming addresses that were provided, and then recording a Council GIS identifier once we had correctly identified the address. Again, this is another means of enriching the raw data with valuable metadata.
Does it have a phone number? Associate a structured phone number.
What does the record refer to? Add tags from a controlled taxonomy so that the record can be filtered – e.g. if the record refers to building damage, it may be tagged with ‘building evaluation’. If it is a report of a missing person, it should be tagged with ‘missing person’.
This is perhaps the most time-consuming part, but it is also the most important, as it opens up far more potential for actually managing and sharing the information.
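The enrichment step above could be sketched roughly as follows. The record fields and the taxonomy values are illustrative assumptions; the one real design decision shown is that tags are validated against a controlled taxonomy, so that later filtering stays reliable.

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Controlled taxonomy -- tags outside this set are rejected.
# (Illustrative values only.)
TAXONOMY = {"building evaluation", "missing person", "infrastructure"}

@dataclass
class EnrichedReport:
    raw_text: str
    address: Optional[str] = None     # structured/official address
    lat: Optional[float] = None
    lon: Optional[float] = None
    council_id: Optional[str] = None  # council GIS property identifier
    phone: Optional[str] = None       # normalised phone number
    tags: List[str] = field(default_factory=list)

def add_tag(report: EnrichedReport, tag: str) -> None:
    """Attach a tag, enforcing the controlled taxonomy."""
    if tag not in TAXONOMY:
        raise ValueError(f"unknown tag: {tag!r}")
    if tag not in report.tags:
        report.tags.append(tag)

# A reviewer takes a raw report and layers structure onto it.
enriched = EnrichedReport(
    raw_text="Wall looks like it is about to fall",
    address="39 Riccarton Road", council_id="P1234")
add_tag(enriched, "building evaluation")
```

Free-text tagging would be quicker to build, but without a controlled vocabulary the ‘building evaluation’ filter described below would miss every report tagged ‘bldg damage’ instead.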
Wrapping it all in Management Tools and Reporting
Now we can finally get down to the crux of what we’re trying to achieve – take raw unstructured information and provide it in a form that information systems can understand, present, and search in far more valuable ways.
If we assume that all incoming information about building safety is tagged with ‘building evaluation’ then we can provide a web page that allows someone in the building safety evaluation team to review all the incoming reports that are relevant to them.
At this point we go the final step, as we start allowing people in these focused teams to associate actions and history with the original record. You may have a small team within Building Evaluation reviewing incoming records tagged ‘building evaluation’; because the address (and potentially lat/long and council identifiers) has already been structured, it should be trivial to see if other records have been entered that refer to the same building, or to nearby buildings. Without adding this metadata first, it would be a lot harder to automate some of this information management.
We can then link multiple records that refer to the same building – such as different reports over time, or a neighbouring property that may refer to the building.
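The filtering and linking described above fall out almost for free once the metadata exists. A sketch, again with an assumed record shape, using the council property identifier as the linking key:

```python
from collections import defaultdict

def reports_for_team(reports, tag):
    """All reports carrying a given taxonomy tag -- the team's work queue."""
    return [r for r in reports if tag in r["tags"]]

def link_by_property(reports):
    """Group reports that refer to the same council property identifier."""
    groups = defaultdict(list)
    for r in reports:
        if r.get("council_id"):
            groups[r["council_id"]].append(r)
    return groups

# Illustrative enriched records (dicts for brevity).
reports = [
    {"text": "wall leaning", "tags": ["building evaluation"],
     "council_id": "P1234"},
    {"text": "facade cracked, seen from next door",
     "tags": ["building evaluation"], "council_id": "P1234"},
    {"text": "person unaccounted for", "tags": ["missing person"],
     "council_id": None},
]

team_view = reports_for_team(reports, "building evaluation")
same_building = link_by_property(team_view)["P1234"]  # both reports linked
```

Proximity linking for nearby buildings would work the same way, just keyed on a distance test over the lat/long fields instead of an exact identifier match.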
The best part though, is when we start adding actions – for example, if an Urban Search and Rescue team is tasked to a building, then that action (sending a team to perform an initial rapid building assessment) can be associated with that building and the team, and of course with the original record that reported it. This means that if someone enquires whether anything is being done, we have the history of who was tasked where and when.
When the team returns, we can mark the action as completed, giving us a record that it has been done. Not only that, but any quick comments from the team could be added as a new record associated with that building. Likewise, any digital photos, or even scanned copies of the rapid building damage assessment forms, could be attached.
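The action-tracking side might look something like this – a hypothetical record tying a tasking to a building and a team, so the full history of who was tasked where and when is retained:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List, Optional

@dataclass
class Action:
    description: str
    council_id: str               # the building this action relates to
    team: str
    tasked_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))
    completed_at: Optional[datetime] = None
    attachments: List[str] = field(default_factory=list)  # photos, scans

    def complete(self, attachments=()):
        """Mark the action done and attach any material the team brings back."""
        self.completed_at = datetime.now(timezone.utc)
        self.attachments.extend(attachments)

# Tasking a team, then recording its return -- identifiers are made up.
action = Action("Initial rapid building assessment", "P1234", "USAR Team 2")
action.complete(attachments=["assessment_form_p1234.jpg"])
```

Because the action carries the same council identifier as the enriched reports, answering “is anything being done about this building?” becomes a simple lookup rather than a radio call.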
Scanned forms are of course interesting, as you could scan them initially and add them to the system as images, but also flag those to be reviewed to create metadata so that the form data is now accessible via metadata – such as the building status as determined by the assessment Safe/Green, Restricted/Yellow or No Access/Red. Whilst Optical Character Recognition (OCR) could speed this process, after seeing the handwriting of engineers, I’d suggest that human review and triage of key information on the forms would get more usable information into the system sooner. And yes – the idea is of course a tablet application that digitises the information in the field and uses an Emergency Data eXchange Language (EDXL) extension to submit the information via EDXL-DE back to a server.
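For the submission step, an EDXL-DE message is essentially an XML distribution envelope wrapped around the payload. The sketch below builds a drastically simplified envelope with Python's standard library; only a handful of the elements defined by the OASIS EDXL-DE 1.0 schema are shown, and a real implementation should follow the full specification (the sender and distribution IDs here are invented).

```python
import xml.etree.ElementTree as ET
from datetime import datetime, timezone

# EDXL-DE 1.0 namespace, per the OASIS specification.
NS = "urn:oasis:names:tc:emergency:EDXL:DE:1.0"

def build_envelope(sender: str, dist_id: str, payload_text: str) -> str:
    """Build a minimal (incomplete) EDXL-DE distribution envelope."""
    root = ET.Element(f"{{{NS}}}EDXLDistribution")
    for tag, value in [
        ("distributionID", dist_id),
        ("senderID", sender),
        ("dateTimeSent", datetime.now(timezone.utc).isoformat()),
        ("distributionStatus", "Actual"),
        ("distributionType", "Report"),
    ]:
        ET.SubElement(root, f"{{{NS}}}{tag}").text = value
    content = ET.SubElement(root, f"{{{NS}}}contentObject")
    desc = ET.SubElement(content, f"{{{NS}}}contentDescription")
    desc.text = payload_text
    return ET.tostring(root, encoding="unicode")

xml_doc = build_envelope(
    "assessor@example.org", "nz-bse-0001",
    "Rapid assessment submitted for 39 Riccarton Road")
```

The tablet application would POST an envelope like this back to the server, with the structured assessment form carried in the content object rather than a plain description.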
Of course, with all this structured metadata now wrapped around the original unstructured reports – this opens up so much potential for reporting and where appropriate sharing this information using standards such as the EDXL for achieving true information interoperability.
This is something that both Ushahidi and Sahana have been working on since the response to the Haiti earthquake, when we were trying to provide management tools in Sahana Eden to wrap around the crowdsourced information that was being collected by Ushahidi.
Recently discussion on the CrisisCommons email list raised an issue about security pertaining to crowdsourced data – and the ease with which the information can be deleted by an anonymous malicious individual when using tools such as etherpad or Google Docs with open editing rights.
In this case an anonymous user was deleting data as quickly as it was entered in a shared public document. What is a more concerning risk is perhaps the subtle editing of crowdsourced information, where the edits are not obvious enough to be detected – such as the subtle and malicious modification of facts and figures.
For tech volunteers, there is a careful balance to be struck between protecting information (in this particular case its availability and integrity) and not creating significant barriers to entry.
The first obvious solution is to restrict access to the document to authorised users. This means that only trusted individuals are able to contribute to the collection and management of unstructured crowdsourced information.
This is less than ideal, as new users who volunteer immediately following an emergency have not yet developed a trust relationship with, for example, the CrisisCommons community, and so are unable to contribute immediately.
I believe that with the simple use of a two-tier approach, one can easily protect the quality of the final document(s), whilst still making it easy for new volunteers to contribute.
You effectively create two types of document:
- Public and open documents – which are open to all to edit, and are effectively a rough scratchpad for collecting unstructured information.
- Trusted documents – which are open for only a limited pool of trusted users to edit, but draw from the content provided in the public and open documents.
The trusted editors effectively become the curators of the information, and once content has been copied and edited from the open documents, malicious anonymous users won’t be able to waste other volunteers’ time through deletion or editing.
There are other process benefits to this approach. For example, you may create a public document for particular topics of the emergency – such as infrastructure, health/medical, and background information (e.g. weather forecasts, population demographics etc.) – and these multiple individual documents may map to a single section within the trusted document, producing an edited and trusted version of the crowdsourced information.
Still, from an operational perspective, this is a far from ideal approach, and there are certainly more robust approaches available to turn this into a process that can be used for intelligence gathering and situation reporting.