Archive for the ‘disaster’ tag
Here’s another CAP idea I wanted to get out before I read a document I’ve been sent that may cover the same topic (just to make sure I don’t inadvertently draw on someone else’s idea). This concept also came to me last year whilst I was working on the CAENZ Public Alerting research report (I’m still waiting for this to be publicly released so I can link to it). My recent post proposing a browser plugin for CAP alerts is part of the bigger picture I am outlining today.
The background for it came from the realisation that a significant number of organisations in New Zealand are responsible for the publication of alerts – whether to a secure group or to the general public. For example, there are 16 CDEM Groups, 70-odd local authorities, GeoNet, MetService, the Police, those responsible for infrastructure such as roads, and the Centre for Critical Infrastructure Protection.
Each of these agencies would need some means of hosting a CAP server, and of building resilience into their CAP server(s). Given the potentially large number of CAP servers required, there are approaches that could provide a strong and robust CAP network without a proliferation of potentially fragile standalone servers. This is all built on the concept of a secure peer-to-peer network of CAP servers.
It should be possible to federate a group of CAP servers into a cluster. If we take a CDEM Group as an example, the group members may elect to deploy say four or five CAP servers to create a peer-to-peer network providing CAP alert hosting for the Group. Any authorised CAP message posted to one of the federated servers would automatically be distributed to the other CAP servers in the federation, so the message is instantly replicated and made available across the cluster.
I believe that the more robust approach to developing a CAP network is to base it upon peer-to-peer network technology, tweaked to provide a secure means of publishing messages to the network. These servers could of course be deployed any way that provides maximum resilience – located close to major New Zealand Internet backbones, and quite possibly well outside their geographic region. This has two potential benefits for resilience. Firstly, the message is available from multiple servers, so the load (particularly for publicly accessible CAP servers) can be distributed across them automatically. Secondly, should any particular server fail, the messages will still automatically be served by the other CAP servers in the federation.
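To make the federation idea concrete, here is a minimal sketch of the replication step, assuming each peer simply accepts CAP XML over HTTP POST. The peer URLs and the /cap/replicate endpoint are illustrative only – no existing CAP server exposes this exact interface.

```python
# A minimal sketch of federation replication, assuming each peer accepts
# CAP XML over plain HTTP POST. The peer URLs and the /cap/replicate
# endpoint are illustrative only.
import requests

PEERS = [
    "https://cap1.example.org.nz",
    "https://cap2.example.org.nz",
    "https://cap3.example.org.nz",
]

def replicate_alert(cap_xml, origin=None):
    """Push an authorised CAP message to every other peer in the federation.

    Returns the peers that could not be reached, so the originating
    server can retry them later.
    """
    failed = []
    for peer in PEERS:
        if peer == origin:
            continue  # don't echo the message back to its source
        try:
            response = requests.post(
                peer + "/cap/replicate",
                data=cap_xml.encode("utf-8"),
                headers={"Content-Type": "application/xml"},
                timeout=10,
            )
            response.raise_for_status()
        except requests.RequestException:
            failed.append(peer)  # one peer being down never blocks the rest
    return failed
```

The important property here is the failure handling: one unreachable peer never prevents the message reaching the rest of the federation.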
One example of how this could be deployed is the following.
Provide a national network of federated CAP servers at key points – a nationally managed set of strategically located servers. For example, Government internal CAP servers would most likely be located on the Government Shared Network (GSN), or whatever comes out of the recent restructure of this service. Public servers may be spread around both by geography and by ISP (e.g. key ISPs may host a CAP server for their customers). In all circumstances these would fall back to other CAP servers in the federation in case of failure.
Naturally, the open approach applied to peer-to-peer file sharing is not appropriate for a trusted network CAP service. To create a more secure network, something like a two-tier approach may be necessary.
CAP Publishing Servers
Private CAP publishing servers may be utilised to act as the publishing gateway to the public read-only peer-to-peer network provided by the CAP read-only servers. Authentication, encryption and/or digital signing should be used as the basis for authorising the publication of a CAP message via the publishing server. The publishing server is responsible for verifying the digitally signed CAP alert, as well as the authentication details confirming the user is authorised to post the alert. Once authorised, the CAP publishing server publishes the alert to the read-only servers. This is the only channel for publishing CAP alerts to the network. Some form of CAP authoring software (or service) may be useful for creating CAP messages and then publishing them to the servers. One protocol that may be useful for publishing is Atom, as suggested by this IBM article.
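As a rough illustration of this gatekeeping role, here is a sketch of what the publishing step might look like, assuming the Python signxml library for verifying XML digital signatures; is_authorised_publisher() is a stand-in for a real access-control lookup, and replicate_alert() is the federation push sketched earlier.

```python
# A sketch of the publishing server's gatekeeping, assuming the signxml
# library for XML digital-signature verification. is_authorised_publisher()
# is a stand-in for a real access-control lookup, and replicate_alert()
# is the federation push sketched earlier.
from signxml import XMLVerifier

def publish_alert(cap_xml, username, signer_cert):
    """Verify and publish a CAP alert; returns True if it was accepted."""
    # Refuse anything not posted by a known, authorised publisher.
    if not is_authorised_publisher(username):
        return False

    # Check the XML digital signature against the signer's certificate;
    # signxml raises an exception if the alert has been tampered with.
    XMLVerifier().verify(cap_xml, x509_cert=signer_cert)

    # Only a verified, authorised alert reaches the read-only servers.
    replicate_alert(cap_xml)
    return True
```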
CAP Read-only Servers
These are the user-facing servers that provide CAP messages to their end users. Only the CAP publishing servers are authorised to publish CAP messages to the peer-to-peer network for dissemination.
Naturally, this concept is part of a larger plan to build a CAP framework, and the circle could be partly closed by designing web browser plugins capable of connecting to the peer-to-peer CAP read-only servers.
Widespread deployment of CAP browser plugins may mean that traditional servers are not capable of supporting tens or hundreds of thousands of CAP clients regularly checking for new alerts. A peer-to-peer approach will probably provide the most scalable and robust means of disseminating CAP alerts via the Internet.
I haven’t blogged about Sahana for a long time, and I’ve got plenty to write. So much that I can’t decide where to begin, so I’m going to pick a nice small piece to start with.
Last year, I was involved in a project in New Zealand to produce an investigative report on Public Alerting Systems with the New Zealand Centre for Advanced Engineering. This report will hopefully soon go public, and I’ll provide a link when it does.
This report was looking at the different technological solutions for getting alerts out to people in as timely a manner as possible. At one point in the search for different systems, we started discussing means of injecting HTML into web pages via an ISP, so that a public alert could be sent out to anyone on the Internet. I’ll talk about this and other options later. Let me get to the point of this post.
After starting with the HTML injection idea, and progressing through a few others, I reached a kind of natural conclusion that a more suitable means of alerting users via a web browser would be a browser plugin that can subscribe to Common Alerting Protocol (CAP) feeds; when a relevant alert comes in via CAP, it is displayed to the user in their browser using a XUL:notificationbox at the top of the webpage.
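For a flavour of what the subscription side might look like, here is a sketch of a poller that watches a CAP Atom feed and raises a notification for each new entry, using the feedparser library. The feed URL is an example, and notify() is a placeholder for whatever notification mechanism the browser provides (e.g. the XUL:notificationbox mentioned above).

```python
# A sketch of the plugin's subscription loop, using the feedparser
# library. The feed URL is an example, and notify() is a placeholder
# for the browser's notification mechanism.
import time
import feedparser

FEED_URL = "https://alerts.example.govt.nz/cap/atom"  # illustrative
seen = set()

def poll_once():
    feed = feedparser.parse(FEED_URL)
    for entry in feed.entries:
        entry_id = entry.get("id", entry.link)  # Atom id, else the link
        if entry_id in seen:
            continue
        seen.add(entry_id)
        notify(title=entry.title, link=entry.link)  # placeholder UI hook

while True:
    poll_once()
    time.sleep(300)  # re-check the feed every five minutes
```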
Anyway, a possible idea for a Google Summer of Code 2009 project is constructing a browser plugin for Firefox that implements this alerting capability, and expanding Sahana to support full publishing of CAP alerts. Here are some features it could/should support.
- Bundle publicly available CAP feeds (ideally listed in a nice Country/State taxonomy) – this will make it easy to discover and utilise existing CAP services.
- Allow users to optionally register their location in some manner, so that the plugin can identify relevant alerts (by location) and give them higher status than, say, remote alerts. Users should be able to register multiple locations – whether that is home and work, or multiple cities (a minimal sketch of such a distance check appears after this list). Privacy is of course king and this information must be protected.
- Provide a means of adding additional user provided CAP feeds to the plugin.
- Provide the ability to open the alert in a new tab and format it in a human-readable manner, including niceties such as embedded Google Maps showing geospatial information, and links back to the source website of the alert for verification.
- Implement means of verifying messages that are digitally signed, and decrypting encrypted messages.
- Implement a CAP feed in Sahana so that Sahana can act as both a producer (in terms of creating a CAP message) and a publisher (in terms of making it available via a CAP RSS/Atom feed).
- Implement a CAP proxy or similar, so that say all users of a Sahana server can obtain CAP alerts directly from the Sahana server – rather than going to an external website. This may be useful for distribution of alerts within an organisation or centre without having every client browser connecting to an external server.
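As a sketch of the location-relevance check mentioned in the list above, the plugin might compare the centre point of an alert’s area against each registered location using a great-circle distance. The locations and radius below are examples only.

```python
# A sketch of the relevance check: is the centre of an alert's area
# within a chosen radius of any location the user has registered?
# The locations and radius below are examples only.
import math

HOME = (-43.53, 172.64)   # example: Christchurch
WORK = (-41.29, 174.78)   # example: Wellington
REGISTERED_LOCATIONS = [HOME, WORK]

def haversine_km(a, b):
    """Great-circle distance in kilometres between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))

def is_relevant(alert_centre, radius_km=50):
    """True if the alert is near any of the user's registered locations."""
    return any(haversine_km(alert_centre, loc) <= radius_km
               for loc in REGISTERED_LOCATIONS)
```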
What would be very nice, but may be beyond the capabilities of Sahana servers currently, is making the CAP service on a Sahana server easily discoverable on a LAN via zero-conf services such as Bonjour.
Draft Outcomes for Assessment
The outcome of such a project would be to produce a working solution whereby a Firefox Browser plugin is capable of working with public CAP alerts and that CAP within Sahana is capable of fully acting as a CAP server via RSS/Atom feeds to the CAP alerting plugin.
- Implement the specified requirements
- The browser plugin works as expected with publicly available CAP feeds.
- The browser plugin works as expected against the Sahana demo server. (Yes, this means that your modifications to CAP on SahanaPHP need to be implemented).
- Implement the Sahana CAP server in SahanaPY
- Provide one or more standalone CAP clients for a mobile platform, e.g. Google Android or the Apple iPhone/iPod Touch
- Write an Internet Explorer plugin with similar functionality – given IE’s widespread usage and deployment, it is important that this capability is also provided for IE.
Whilst the plug-in can and should operate completely independently of Sahana, it should also be designed to work well with Sahana servers (e.g. SahanaPHP and SahanaPY).
Anyway, this is just an idea I wanted to float and get out in the community for discussion. I’d welcome any further comment or ideas to build upon this!
Since buying into the iPhone ‘cult’ back in July, I have been intrigued as to the applications that will be released for it that have relevance to emergency managers. One I’ve just discovered (via TUAW – check out their pics of the app) is called Hurricane. Whilst this was released last year, it has since been updated to incorporate new functionality.
The price is reasonable (USD$3.99) if you want quick access to storm information on your phone at all times. It sounds as though, when there are active storms, opening the app brings up a list of the current storms, providing quick access to more information. Outside of that, it has a record of past storms, as well as a satellite view.
Sure, much of this information can easily be obtained for free, but the benefit of an application such as this is the wrapper that makes it fast and transparent to get the information you’re after. The only addition I can think of at a quick glance would be linking to the text watches and warnings from the NOAA Storm Prediction Center.
It is going to be exciting to see what applications are released in the coming years that provide quick access to both remote and locally stored emergency management information!
In a decision that will probably frustrate some Aucklanders, it has been announced that Whenuapai Airport will remain in the hands of the NZ Defence Force. This is probably the best outcome, as it will ensure that the field remains available as an emergency alternative airstrip in case anything happens to Auckland International in Manukau. Whilst Auckland probably doesn’t need a second commercial airport, you never know when you might need an alternate airstrip during an emergency.
I’ve only recently started following the NZ Health WebEOC blog, but it is exciting to see this sort of information sharing taking place. Congratulations to Charles and the team for the work involved. I found in their feed today an article about the Ministry of Health suffering from the recent Conficker worm outbreak over the past few days. There is more info here from Computerworld.
First, what is Conficker? From Wikipedia:
Conficker disables a number of system services such as Windows Automatic Update, Windows Security Center, Windows Defender and Windows Error Reporting. It then connects to a server, where it receives further orders to propagate, gather personal information, and downloads and installs additional malware onto the victim’s computer. The worm also attaches itself to certain critical Windows processes such as svchost.exe, explorer.exe and services.exe.
What is interesting is that the security hole that Conficker utilises to gain control of the Windows operating systems was plugged in a security patch released on 23 October 2008. That means in theory that all those systems that have been compromised in the past week were systems that had not had the patch applied that was released in late October. The security patch to protect against Conficker-like attacks for Windows 2000, Windows XP and Windows Server 2003 was marked as critical and should have been installed in a timely manner.
What are some lessons from an emergency management and business continuity perspective?
1. If you’re running Microsoft operating systems – you must keep them patched, and do it in a timely manner. Windows represents the largest near-homogeneous family of operating systems in the world. This makes it the primary target for the developers of botnets and malicious software. Whilst I recognise that it takes time to deploy patches in a large organisation such as the Ministry of Health, an organisation will always be at risk if it doesn’t install security updates promptly. All Microsoft ‘Critical’ patches should be applied within weeks of release.
2. Where possible, organisations should attempt to diversify their installed base of operating systems. If you solely run Microsoft operating systems, then a worm has the potential to take down the entire organisation. If you run a heterogeneous computing environment with a variety of operating systems (e.g. Windows, Unix and OS X), then any outbreak of malicious software will only directly impact some of the systems. In our small business I support all three of these platforms – we have Windows and OS X clients, with servers on Linux, OS X Server and desktop OS X – and this is one of the main reasons I refused to deploy solely Windows software for clients and servers when setting up our business. Reliance on a homogeneous computing environment decreases overall IT resiliency.
3. Emergency Management Information Systems (EMIS) should ideally be able to be segregated from production systems. Malicious software doesn’t have to infect a system to have an impact on it. Even if the malicious software just consumes 100% of the network bandwidth, that is enough to create a continuity issue by denying access to critical systems – such as servers. Therefore, an EMIS should really be configured on a separate network so that even if the internal network bandwidth has been fully consumed, and access to the Internet severely restricted to limit the spread, critical systems can still be provided to the wider world. Network segmentation can be used to limit the impact upon critical systems, and direct access to the emergency network segment could be provided from network jacks in the EOC. Once again, these should be on an entirely independent network segment to ensure that emergency operations can continue during an outbreak of malicious software on the main LAN.
Finally, emergency managers should also make themselves aware of the Centre for Critical Infrastructure Protection (CCIP), and consider signing up for its vulnerability alert emails. These are sent out for critical advisories associated with information security risks, and can be good prompts for getting in touch with IT and making sure that your systems are patched and up-to-date.
Update 2009-01-27: I see that the Manager of the CCIP went public yesterday saying the CCIP advised MOH of the security patch in October. The real question is whether the Ministry has custom applications installed on all its systems (e.g. including clients), or if they are just talking about server applications. If most of the desktops are only running Office and a groupware application such as Outlook or Notes, then they should have been able to be relatively easily patched before December. It is well recognised that patching servers running legacy applications takes longer to test for complications before deploying patches.
I’ve been involved in some discussions in the past few days about the use of Twitter for emergency management purposes. It’s something I’ll write about in more detail and rigour in the next wee while, but I just want to get a few links to articles out there in the meantime.
This GovTech article spawned the discussion on the IAEM email list. Twitter is certainly not a robust notification system, but it is a social messaging system that does have its place – particularly for interacting with the public.
Concerns were raised about how Twitter usernames could masquerade as official agencies, and other issues around the authority of information provided on Twitter.
In reply on the list, I made the following brief comments that may help an agency adopt and utilise a social network such as Twitter and mitigate some of the issues.
Some valid concerns about the risks, but there are always means of mitigating them.
1. Re: Globalisation – one of the biggest issues you missed is that of privacy and the protection of private information submitted to and stored in these systems. Ironically, the United States is one of the few civilised countries that doesn’t provide wide-ranging privacy protections when compared to European countries and the likes of New Zealand, which has very strong privacy legislation. The way information submitted to social networking sites is treated varies significantly depending on the jurisdiction it is hosted in. As many sites are hosted in the United States, it would indeed be good to see the United States implement stronger legislation protecting personal information (e.g. to the level provided in Europe and New Zealand; I’m not sure about Canada, and I think Australia might fall somewhere between the US and NZ).
2. As per any form of public alerting/notification, it is important to teach the receiver that they should attempt to cross-check, verify, and go hunting for more information. One technique mentioned in the GovTech article linked earlier in this thread was using TinyURL to embed links to official websites that provide corroboration of information, or more detail than can be wedged into the 140 characters provided by Twitter. Likewise, agencies should put up pages on their websites that identify their official Facebook page, official Twitter username, etc. Not only can they point out their official Twitter username, they could also identify usernames that may be masquerading as that organisation. You could use the Twitter > Profile > More Info URL to link back to this page on a website that the agency controls. It is still not perfect, but it would provide a far more robust approach to providing evidence that a given Twitter username does represent an official person or agency.
3. Official and unofficial directories of usernames can be provided, e.g. <http://govtwit.com/>. These can be constructed, and the people/organisations maintaining them can contact the organisations to verify that they do indeed manage a given username. This, again, allows a far more trustworthy list of official representatives to be constructed. A state EM organisation, for example, could maintain a page on its official website listing the Twitter usernames of all the official EM and related agencies in that state. As long as you have a trusted representative constructing the directory, there is less concern about the usernames in it, as the maintainer performs the authentication for you. E.g. IAEM may elect to build a register and maintain it on our website.
4. If an agency finds someone masquerading as their organisation, they can always approach Twitter, highlight the problem username, and ask that they do something about it. Twitter is a private company in San Francisco. E.g. if the unofficial usdhs Twitter username started spreading false information during an emergency, I’m sure a call from DHS to Twitter in San Francisco would fix that fairly quickly.
The whole idea of social networks is that you build your own network of trust. This means that there is some work associated with constructing it, but there are a number of means to build this web of trust – some of which I’ve mentioned above. Link with other official agencies, link to it from your official websites that you control. Fake usernames will not be able to compete with this and will quickly be identified as fakes as they will not be able to build up a web-of-trust.
And yes, social networks are not for secure communication. They are to get information out and widely disseminated as quickly as possible.
One reason sites like Twitter have become so popular with the public is because they can get information quicker than we, as emergency managers, are able to otherwise provide it. That sends a pretty strong message that we need to do better in terms of getting information out to the public.
I’ll try and expand on this in the not-too-distant future – I might end up writing an article for the IAEM Bulletin. As an aside, a related topic is how to use tags to identify emergency management related posts on a social networking site such as Twitter. I’ve passed this on to the EIIF W3C Incubator Group I’m involved with, as I believe that any tagging structure needs to be compatible with other standards used for emergencies and disasters. This way software could watch for certain tags, pick them up, and feed them into a disaster management system such as Sahana.
Once again the key point is trying to create an integrated approach to an emergency management information system (EMIS) – the software is only half the deal; the other half is the suite of information standards used to communicate with other systems. Any tags designed for Twitter must be designed in a way that an EMIS can search, gather and try to understand.
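As a sketch of what such tag-watching might look like, the following scans incoming public messages for an agreed emergency tag vocabulary before handing matches to an EMIS. The tags themselves and submit_to_emis() are purely illustrative.

```python
# A sketch of tag-watching: scan incoming public messages for an agreed
# emergency tag vocabulary and hand any matches to an EMIS for triage.
# The tag set and submit_to_emis() are purely illustrative.
import re

WATCHED_TAGS = {"#eqnz", "#fire", "#flood", "#sheltersought"}  # examples
TAG_PATTERN = re.compile(r"#\w+")

def extract_watched_tags(message):
    """Return any tags in the message that belong to the agreed vocabulary."""
    found = {tag.lower() for tag in TAG_PATTERN.findall(message)}
    return found & WATCHED_TAGS

def watch(messages):
    for message in messages:
        tags = extract_watched_tags(message)
        if tags:
            # A real system would also capture the author, timestamp and
            # any location metadata before ingesting into Sahana.
            submit_to_emis(text=message, tags=sorted(tags))
```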
I originally wrote this article for The Box, the Tuesday Technology section of The Press in Christchurch, New Zealand – it appeared on the 23rd September 2008. It also appeared on Stuff.co.nz.
Have you ever wanted to quickly find all the photos taken at your family bach? Chances are that unless you’ve been meticulous in filing your photos or tagging them with keywords, this could take quite a bit of time. Wouldn’t it be easier if you could click on your bach on a map, and bring up all of your photos within 1km, or display all of your holiday photos on a map? This is the promise of geotagging.
Simply put, geotagging records the latitude and longitude of the camera at the time the photo was taken and stores it in the image file.
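For the curious, here is a minimal sketch of reading a geotag back out of a JPEG, assuming the Python Pillow imaging library. EXIF stores the coordinates as degree/minute/second rationals plus N/S and E/W reference letters, so a little conversion is needed.

```python
# A minimal sketch of reading a geotag out of a JPEG, assuming the
# Pillow imaging library. EXIF stores coordinates as degree/minute/second
# rationals plus N/S and E/W reference letters.
from PIL import Image
from PIL.ExifTags import GPSTAGS

def read_geotag(path):
    """Return (lat, lon) in decimal degrees, or None if untagged."""
    exif = Image.open(path)._getexif() or {}
    gps_raw = exif.get(34853)  # 34853 is the EXIF GPSInfo tag ID
    if not gps_raw:
        return None
    gps = {GPSTAGS.get(key, key): value for key, value in gps_raw.items()}

    def to_decimal(dms, ref):
        degrees, minutes, seconds = (float(x) for x in dms)
        sign = -1 if ref in ("S", "W") else 1
        return sign * (degrees + minutes / 60 + seconds / 3600)

    return (to_decimal(gps["GPSLatitude"], gps["GPSLatitudeRef"]),
            to_decimal(gps["GPSLongitude"], gps["GPSLongitudeRef"]))
```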
Geotagging is not a new technology. It has been possible to geotag images for several years now, but previously only enthusiasts or professionals geotagged their photos, as it added extra steps to processing them. It also required a GPS receiver that could record tracks – a breadcrumb trail of where the GPS had been. By matching the time in the GPS track log with the time each photo was taken, it is possible to estimate reasonably well where the photo was taken. This took extra time and effort, and for all but a dedicated few it was not worth it. The GPS receivers also added extra weight and bulk to carry around.
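A sketch of that matching step: given track points parsed from a GPS log as (unix_time, lat, lon) tuples, pick the point nearest in time to the photo’s timestamp.

```python
# A sketch of the track-log matching step: pick the GPS track point
# nearest in time to the photo's timestamp. Track points are assumed
# to be (unix_time, lat, lon) tuples parsed from a GPX log.
def locate_photo(photo_time, track):
    """Return the (lat, lon) of the track point closest to photo_time.

    A fancier version would interpolate between the two bracketing
    points, and give up if the time gap is too large.
    """
    if not track:
        return None
    nearest = min(track, key=lambda point: abs(point[0] - photo_time))
    return (nearest[1], nearest[2])
```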
Recently GPS functionality has greatly shrunk in size and power demands, making it more friendly for the photographer by enabling the technology to be directly embedded in cameras and mobile phones. Already a number of camera phones support geotagging photos – including much of the Nokia N series, and the recently released Apple iPhone 3G. Nikon has embedded a GPS receiver in their new Coolpix P6000 compact, and provide an optional GP-1 GPS attachment for recent Nikon digital SLR cameras. As more devices support geotagging, especially more affordable cameras and mobile phones, the possibilities (and the risks) are going to grow exponentially.
Combining GPS receivers with other devices makes the whole geotagging process transparent and automatic, requiring no effort from the user. This is going to rapidly open up opportunities for all sorts of geotagged data. But is it just technology for technology’s sake? Not really – there are existing applications for geotagging, and even more to come.
Travel photography just begs for geotagging. It is a great means of recording where holiday snapshots were taken, as you can easily show people where you took the photos. Not only that, but as people upload geotagged photos, they become a great travel planning tool, as you can see photos that others have taken in a location you are travelling to, and find sights nearby that you might otherwise have missed.
Real estate also stands to gain from geotagged photos – by being able to quickly load photos of properties for sale into an online searchable map, it will be easier to browse location and appearance at the same time. Councils and infrastructure companies have been using geotagged images for a number of years now to assist with managing assets – imagine being able to take a photo of a pothole and send the image to the council without having to try and explain where it is. Geotagging even has applications after a disaster – teams performing reconnaissance of an affected area can take geotagged photos whilst they are there, and when they return to an operations centre, the images and their exact location can be loaded into a mapping system to help authorities gain a better understanding of the extent of damage.
Location-based technology does come with inherent risks – mostly privacy related. Although many people are comfortable posting photos online, they may not be comfortable allowing people to determine the location where the photo was actually taken. This may be particularly relevant in the case of photos taken at home. No doubt we will see tools evolve to help people manage the privacy associated with geotagged photos, but in the meantime it is worth thinking about the content of a photo before uploading geotagged photos online.
The benefits of geotagging for the most part outweigh the risks, and will likely lead to novel applications. The Apple iPhone 3G already has interesting applications taking advantage of geotagged images. Exposure provides mobile access to Flickr – a photo sharing website. The ‘Near Me’ function will get your current GPS co-ordinates from the iPhone, and use them to display geotagged photos from Flickr that were taken near your current location. You can then view a photo, plot its location on a map, and if you desire, Google Maps will give you directions on how to get there.
Geotagged images don’t have to be shared online to reap the rewards – it may be that the biggest benefit is just providing another means of managing the vastly expanding data in your own photo library!
Yes, it is an opinion article, but it is good to see coverage of Sahana in the Wall Street Journal.
A team of IBM developers customized and translated Sahana software, a free, open-source disaster-management system, into simplified Chinese to coordinate relief efforts in Chengdu.
The recent announcement of the second generation iPhone has a large number of people buzzing. The inclusion of Global Positioning System (GPS) capabilities into the phone creates a very capable mobile computing platform that has a lot of potential for emergency management.
What features make it a potentially useful tool for emergency managers?
- Large storage – 8-16GB. Plenty of room for photos, documents and other material in a slim and very portable device.
- Excellent user interface. I’ve been using an iPod Touch (iPhone less the phone) for 9 months now and have to say it is the nicest user interface I’ve used yet on a small device. I find it truly painful to use my Treo 750v mobile phone in comparison.
- Multi-method positioning. The upcoming iPhone will be able to use three different methods to locate the device’s current position. First, and most accurately, it will use GPS. It will then fall back to wifi, listening for nearby wireless access points and looking these up in a georeferenced database over the Internet. If both of these fail, then the least accurate method, using the cell towers, will be used.
- Multi-channel communication. The device will not only be able to connect via mobile carriers, but like the previous version it has wifi – at a minimum it could be used to connect to a local wireless LAN and access a Sahana server disconnected from the Internet.
Sure, there are some negatives too – it is a fragile device not necessarily suited to hazardous environments, and it doesn’t have replaceable batteries. Everything has limitations though and if these are recognised and accommodated, one could still achieve benefits from its usage.
Apple has also released a Software Development Kit (SDK) and infrastructure to allow software developers to write applications to run on the iPhone. This creates opportunities for development of tools that can be deployed for emergency management on an iPhone.
One example – as the iPhone has a GPS, a camera, and means of connecting to the Internet (wifi or mobile) – it wouldn’t be too hard to write an application that could be made available for free download to citizens’ iPhones. Then, any time they see, say, damage on the streets surrounding their home or work, they could take a photo, fill out some quick optional comments on a form, and submit the georeferenced photo and comments over the Internet to a Sahana server – instantly making the image available, geolocated, for emergency managers’ use. And if the phone can’t make a connection due to failure or congestion, the images are queued for delivery once communications are restored.
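A rough sketch of how that submit-or-queue behaviour might work, with a hypothetical /report endpoint standing in for whatever interface a Sahana server would actually expose:

```python
# A sketch of the submit-or-queue behaviour. The /report endpoint is
# hypothetical; a real client would target whatever interface the
# Sahana server actually exposes.
import json
import os
import time

import requests

QUEUE_DIR = "pending_reports"

def submit_report(server, photo_path, lat, lon, comment=""):
    """Post a geotagged photo report; return True on success."""
    try:
        with open(photo_path, "rb") as photo:
            response = requests.post(
                server + "/report",  # hypothetical endpoint
                data={"lat": str(lat), "lon": str(lon), "comment": comment},
                files={"photo": photo},
                timeout=15,
            )
        response.raise_for_status()
        return True
    except (OSError, requests.RequestException):
        return False

def report_or_queue(server, photo_path, lat, lon, comment=""):
    """Submit now, or queue the report locally if the network is down."""
    if submit_report(server, photo_path, lat, lon, comment):
        return
    os.makedirs(QUEUE_DIR, exist_ok=True)
    entry = {"photo_path": photo_path, "lat": lat, "lon": lon,
             "comment": comment}
    queue_file = os.path.join(QUEUE_DIR, "%d.json" % (time.time() * 1000))
    with open(queue_file, "w") as f:
        json.dump(entry, f)

def flush_queue(server):
    """Retry queued reports once communications are restored."""
    if not os.path.isdir(QUEUE_DIR):
        return
    for name in sorted(os.listdir(QUEUE_DIR)):
        path = os.path.join(QUEUE_DIR, name)
        with open(path) as f:
            entry = json.load(f)
        if submit_report(server, **entry):
            os.remove(path)  # delivered, so drop it from the queue
```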
However, recent news of the iPhone SDK suggests that such an application would be in breach of the license agreement. I’m not a developer, but Electronista provides the following text from the license agreement, Section 3.3.7:
applications may not be designed or marketed for real time route guidance; automatic or autonomous control of vehicles, aircraft, or other mechanical devices; dispatch or fleet management; or emergency or life-saving purposes.
I don’t have a problem with most of these – but the broad definition of emergency may stop deployment of emergency management applications on the iPhone. This is understandable from a liability perspective, but I hope it doesn’t stop developers creating ground-breaking emergency management applications using the potential of the iPhone.
Speaking of which, location-aware applications for the second-generation iPhone are already being shown off. Two very interesting ones to pop up so far are Loopt and OmniFocus. Loopt is a location-aware social networking tool that lets you see if any of your friends are nearby so you can hook up for a meal or coffee. OmniFocus for the iPhone introduces location-aware task lists. Near the office? Your office tasks pop up. Need to go to the grocery store to get items on your grocery list? It will provide directions. Very cool possibilities are opening up.
It is going to be an exciting time for location-based services!
I’ve got a busy week ahead, but I just wanted to capture here some media coverage of the exercise as I stumble across it. One thing that is interesting – there are two major exercises taking place this week in which our Government is taking part, with the UKUSA countries running Exercise Cyberstorm II to test their response to virtual attacks on our critical infrastructure. Anyway, onto the news snippets.
- Herald Blog – Are we prepared for a major disaster? (all comments, newest first)
- Exercise Ruaumoko ends
- Emergency services fired up after ‘eruption’
- Shore could house eruption refugees (20080322)
- Remember Animals When You Prepare For Emergencies! (20080325)
- Civil defence heading in right direction (20080402)
- North at risk from Auckland disaster (20080402)