The Philippine Institute of Volcanology and Seismology (PHIVOLCS) is mandated with detecting and alerting on earthquake, tsunami, volcanic, and landslide hazard events in the Philippines. It is currently engaged with the Mayon volcano. Earthquakes are quite frequent, as the country is surrounded by multiple trenches and sits in the Pacific Ring of Fire.
The CAP-on-a-Map project involved PHIVOLCS in its capacity-building and implementation exercises. However, there were delays in securing the IT resources required to operationalize SAMBRO. We are now working on developing and testing the CAP message templates, with the aim of going live by the second quarter of this year.
Friday, March 2, 2018
Friday, December 29, 2017
Sahana First Response Prototype is Ready
Wednesday, November 30, 2016
Spot contributions to policy paper win first prize at ITU conference
We presented our paper on the “intricacies of implementing the ITU-T X.1303 recommended warning standard for cross-agency situational-awareness” in Myanmar, the Philippines, and the Maldives at ITU Kaleidoscope 2016, an IEEE conference sponsored by the Telecommunication Standardization Bureau (TSB). The paper was not technical in the strict sense; rather, it discussed ICT-policy-relevant findings that the conference reviewers perceived as important for the standardization process and the standards community. We were awarded “the best paper”. READ THE FULL STORY.
Friday, September 25, 2015
Training of SAMBRO Master Trainers in Thailand
Sahana Alerting and Messaging Broker (SAMBRO) continues to mature, especially with the Maldives, Myanmar, and Philippine implementations. Trainees from the Meteorological and Disaster Management agencies of the three countries are receiving training on the GIS concepts, techniques, and tools required for developing predefined alert areas, and on administering, configuring, and implementing the CAP-enabled SAMBRO software. The training is part of the ‘CAP on a Map’ project, which aims to improve institutional responsiveness to coastal hazards.
Sunday, April 13, 2014
Practicalities of EDXL - the Sahana Case
Sahana EDXL Experience
“Was EDXL-DE ever used in any of the Sahana disaster management software products?” “Yes, in EDXL-RM” (Resource Messaging); see the code snippet that includes EDXL-DE. EDXL-RM is specifically designed to travel as the payload of an EDXL-DE envelope. The development was for a client wanting to interoperate with WebEOC. “The WebEOC implementation was being handcrafted in raw JavaScript, and Eden’s native S3XML was seen as a simpler solution by the client’s own software developers, who were handling that side of the interface.” Further work on EDXL-RM with the EDXL-DE wrapper was stopped until new use-cases emerged.

“At the National Library of Medicine (NLM), mainly at my behest, they do use EDXL-DE 1.0 as a wrapper.” It encases the triage data (text and photos) sent from the TriagePic application to the web site via web services. This was somewhat a foundation for, and in anticipation of, use cases emerging for data interchange with state and local agencies, as well as FEMA. Those use cases have been slow to emerge in practice. For app communication to the mothership, the overhead of the wrapper, while small compared to photo payloads, is still hard to justify if there is no payoff. The NLM webmaster is suggesting that alternative lightweight (but non-standard) wrappers and payloads, using JSON/REST rather than XML, are the way forward.
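The DE-as-wrapper pattern described above can be sketched in a few lines. This is a hedged illustration, not Sahana's or NLM's actual code: the element names follow the OASIS EDXL-DE 1.0 schema, but the sender, distribution type, and payload values are invented.

```python
# Sketch: wrapping a payload document in an EDXL-DE 1.0 distribution
# envelope. Element names follow the OASIS EDXL-DE 1.0 schema; the
# sender/payload values are illustrative only.
import xml.etree.ElementTree as ET
from datetime import datetime, timezone

DE_NS = "urn:oasis:names:tc:emergency:EDXL:DE:1.0"

def wrap_in_edxl_de(payload_xml: str, sender_id: str, distribution_id: str) -> str:
    """Return an EDXL-DE envelope with the payload embedded as xmlContent."""
    ET.register_namespace("", DE_NS)
    root = ET.Element(f"{{{DE_NS}}}EDXLDistribution")
    for tag, text in [
        ("distributionID", distribution_id),
        ("senderID", sender_id),
        ("dateTimeSent", datetime.now(timezone.utc).isoformat()),
        ("distributionStatus", "Actual"),
        ("distributionType", "Request"),
        ("combinedConfidentiality", "Unclassified"),
    ]:
        ET.SubElement(root, f"{{{DE_NS}}}{tag}").text = text
    content = ET.SubElement(root, f"{{{DE_NS}}}contentObject")
    xml_content = ET.SubElement(content, f"{{{DE_NS}}}xmlContent")
    embedded = ET.SubElement(xml_content, f"{{{DE_NS}}}embeddedXMLContent")
    embedded.append(ET.fromstring(payload_xml))  # the EDXL-RM (or other) payload
    return ET.tostring(root, encoding="unicode")

envelope = wrap_in_edxl_de(
    "<RequestResource xmlns='urn:oasis:names:tc:emergency:EDXL:RM:1.0:msg'/>",
    sender_id="eden@example.org",
    distribution_id="rm-0001",
)
```

The overhead the webmaster complains about is visible here: the envelope adds a fixed set of routing elements around the payload, which is cheap for photo-sized content but pure cost when nobody downstream consumes it.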
“Sahana Eden hasn’t updated the existing DE-1.0 wrapper to DE-2.0, and I’m not sure they will invest much more into EDXL, because every time they offer EDXL, clients find it far too complex and too constraining to build their applications upon, and use our native S3XML format instead.”
S3XML, like OData, does not limit the contents at all. Furthermore, there does not seem to be a coherent “CAP message” resource. If there were, even the standard REST controller could export the full message. Nevertheless, it would be possible to define such a resource with some minor tweaks to the existing data model. As is, many Eden modules tend to define incoherent data models, yet the models can be changed to provide better interoperability.
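To make the point concrete, here is a minimal sketch of what a coherent “CAP message” resource would enable: a generic exporter that joins the alert, info, and area records into one complete document. The table and field names are invented for illustration, not Eden's real data model, and the sketch omits several mandatory CAP elements.

```python
# Sketch: if "CAP message" were one coherent resource (alert + info +
# area rows joined by foreign keys), a generic exporter could emit the
# complete document instead of only the segment behind one GUI.
import xml.etree.ElementTree as ET

CAP_NS = "urn:oasis:names:tc:emergency:cap:1.2"

# Toy per-segment records, as separate GUIs might store them
alert_row = {"identifier": "LK-DMC-0001", "sender": "dmc@example.lk", "status": "Actual"}
info_rows = [{"event": "Cyclone", "severity": "Severe", "headline": "Cyclone approaching"}]
area_rows = {0: [{"areaDesc": "Matara District coastal belt"}]}  # keyed by info index

def export_full_cap() -> str:
    """Join the segment records into one complete CAP 1.2 document."""
    ET.register_namespace("", CAP_NS)
    alert = ET.Element(f"{{{CAP_NS}}}alert")
    for field in ("identifier", "sender", "status"):
        ET.SubElement(alert, f"{{{CAP_NS}}}{field}").text = alert_row[field]
    for i, row in enumerate(info_rows):
        info = ET.SubElement(alert, f"{{{CAP_NS}}}info")
        for field, value in row.items():
            ET.SubElement(info, f"{{{CAP_NS}}}{field}").text = value
        for area_row in area_rows.get(i, []):
            area = ET.SubElement(info, f"{{{CAP_NS}}}area")
            ET.SubElement(area, f"{{{CAP_NS}}}areaDesc").text = area_row["areaDesc"]
    return ET.tostring(alert, encoding="unicode")

document = export_full_cap()
```

The data-model tweak this assumes is exactly the one described above: the segments must reference one parent resource so the standard controller can walk the joins.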
Particular Problem
The idea of this particular project was to use Eden as a database for managing resource requests, and to have an already functioning WebEOC solution send those requests to Eden. The desire was to use EDXL-RM for data exchange, so the first software build provided an EDXL-RM interface for it. “A noteworthy issue was that the base data, e.g. requester information (delivery sites, contact information, etc.), was stored on the Eden side. EDXL did not provide any elements whatsoever to look them up, let alone to maintain them. Nevertheless, that is not an uncommon situation at all: why should the references in EDXL-RM be decentralized? Isn’t it more common that the request management database holds both the request and the requester information?”
Realizing the Scope of EDXL-RM
One should not confuse messaging with information management; the latter is what the project intended. One should realize the scope of EDXL-RM. As stated in Section 1.3 of the specification document, it merely defines 16 separate and specific message types supporting the major communication requirements for the allocation of resources across the emergency-incident life-cycle. It is not one size fits all! Moreover, resource messaging goes through three distinct phases: “discovery”, “ordering”, and “deployment”. The level of detail of the reference information varies in each phase. Therefore, the message broker would need to manage those elements throughout the life-cycle spanning the resource-messaging phases.

Is S3XML Versatile?
Relating an experience of my own with an application involving EDXL-CAP and the use of Eden’s native S3XML for generating Common Alerting Protocol (CAP) compliant document outputs: what I realized was that it was only producing the set of data for the specific GUI and not the entire CAP message. For example, the Alert segment of a CAP message had its own GUI for editing its specific values. Within that segment’s GUI, the S3XML output would only produce the XML for the Alert segment and not the rest.
S3XML is not GUI-specific, not at all. By default a standard REST controller provides the same output for a non-interactive request as for an interactive one, which is meaningful behavior. However, the Point-Of-Interest (POI) exporter for OpenStreetMap (OSM) in Eden, for example, combines multiple data resources into a single output document, and is still RESTful, S3XML-based, and “inline-transformable”.
OData provides a generic interface standard for querying data repositories of all kinds, including schema introspection. Eden’s S3XML is very similar to OData, although it does not implement aggregation methods yet. APIs with (semantic) schema-introspection capabilities have a much higher potential to facilitate interoperability than emergency-management-specific data formats, especially with applications outside the emergency-management domain. That may be an overstatement, as such a level of achievement is still some way off. However, the Humanitarian eXchange Language (HXL) is a big step in that direction.
EDXL-HAVE, Yet Another Experience
“One of the problems encountered with EDXL-HAVE, the standard data structure for Hospital Availability Exchange, is that it assumes the role of an “emergency manager”: a decision maker who controls the resources and is thus in need of the information, and that organisations are ready and open to provide it (i.e. a hub-and-spoke model).” The EDXL-HAVE standard assumes that hospitals (or their operators) report their available capacity to that decision maker, and that the decision maker responds to that information, not by routing patients but by acquiring and deploying additional resources as needed. This does not work where response decisions happen in an entirely decentralized fashion. In multi-national scenarios, why would India even respond to Pakistan’s HAVE data needs, even if it were ready to assist in an event?
In such cases the information flow is more peer-to-peer, and organisations are expected to self-organize. High-level decision making is based on aggregated information, whereas details may not be shared at all. Of course, organisations can make pre-event agreements to share HAVE information, but not at an aggregation level of their choice! Using the protocol thus requires both organizational change (to actually collect the data at the required level of detail) and political change (to share the data) upfront. It would be better if HAVE (as well as the other EDXL standards) had an integrated aggregation pattern, so that one could easily choose the level of detail that fits the particular case without having to adapt one's application.
Does EDXL Require Organizational Change?
EDXL is perceived as a domain model requiring organisational change upfront, rather than one that is easy to implement on top of existing models. The most common answer received from developers Sahana has interacted with was “EDXL wasn’t designed for the cases I was talking about.” To that end, we wonder what the common denominator for the EDXL standards is. In most cases it is FEMA and the US way of dealing with disasters. Although it originated in the US, EDXL-CAP is the only initiative for which neither the US nor FEMA was the early adopter; perhaps that is one reason for its wider global adoption, with several nations and alerting-related vendors implementing the EDXL-CAP standard. A non-advocate of standards may see EDXL very much as a US-specific thing. In fact, he had no requests so far for any EDXL support outside the US, and especially not at the INGO level (e.g. United Nations organizations). New RDF-based (Resource Description Framework) approaches like HXL seem more promising these days and may outperform all the use-case-specific standards in the long run. It is hardly feasible (or even desirable) for most organisations to adapt their data flows to EDXL requirements. Hence, they prefer standards which adapt to their existing resources, like OData or S3XML, rather than the reverse.
The philosophy of letting everybody do their own thing and yet be interoperable is simply more adequate than the idea of making everybody do the same thing in order to be interoperable. However, some level of consistency is required in emergency data exchanges. Relating the concept to disparate spoken languages: fostering a harmonized, meaningful conversation between two people speaking different languages requires an “interpreter” to manage the conversation. Human beings are, to date, far more intelligent than machines and are capable of working with incomplete information to take on the role of an effective interpreter. Machines, by contrast, are relatively inept; one cannot let them do their own thing and yet expect them to interoperate without some level of coherence.
Conclusion
Interfaces can be semantically incompatible even when they implement the same syntactic EDXL standard. There are cases of people interpreting EDXL elements differently in different contexts. Some argue that this is due to a lack of rigor in the standard; it may be a consequence of the top-down ontology approach in EDXL. EDXL does not force people to adopt it, saying “this is the bible, you should practice the religion”. Rather, the intention is to provide a set of elements and a data structure that allows implementers to think things through and ensure their messaging is coherent to a certain extent. I think that is why most of the elements are optional, which allows implementers to build their own policies, meaning workflows, around them.
Thursday, December 5, 2013
ICTs in Mitigation presented at ITU COE in Hanoi
Three talks in one day at the ITU Asia-Pacific Centre of Excellence Training on ICT Applications for Mitigating Natural Disasters. The event was hosted by Viettel and held at the Crowne Plaza Hotel in Hanoi, Vietnam, from the 28th to the 29th of November, 2013.
First talk :: The presentation emphasized national emergency communication plans and the considerations for developing a resilient emergency communication system. To that end one needs to:
1. Understand the natural and industrial hazard risk profile (e.g. Mongolia)
2. Determine the emergency ICT system:
(a) State of the plans, policies, and procedures
(b) Clarity of EM stakeholder roles and responsibilities
(c) Implementation of multi-agency situational-awareness
(d) Gaps in communications and business continuity plans
(e) Readiness for all-hazards all-media communication
A lot of the lessons learned were taken from the LIRNEasia report to UNESCAP.

Second talk :: Sahana ecosystem for developing disaster mitigation applications. The Sahana ecosystem essentially comprises a community of practice; namely, a group of individuals sharing a common interest in investing their resources in developing information systems for the disaster mitigation, preparedness, response, and recovery phases. The power of the community-of-practice approach is one of the main reasons the Philippines community was able to get the Sahana community's assistance in fulfilling their humanitarian-operations information sharing and publishing needs. Sahana members could be identified as “technology stewards”, a term adopted from communities-of-practice theory.
Third talk :: A national emergency communication protocol should implement the CAP standard, defining the country profile, a register of alerting authorities, and alerting procedures. The talk presented the Common Alerting Protocol-enabled future trends of disaster-warning applications. The all-hazard all-media protocol is quickly expanding into the advertising and digital-signage space.
Tuesday, November 6, 2012
Digital Story - Empowering Communities with Voice for Crisis Management
Friday, May 11, 2012
How the Sahana CAP Broker can break the Interagency Rivalry
Everywhere, government agencies are territorial and fear losing their budgets and their ability to stand their ground. They therefore choose to work as silos with little lateral integration. Such structures are ineffective and lead to irresponsible behaviour, wreaking havoc on citizens.
Time and time again we hear of the shortcomings arising from unplanned and ad-hoc procedures carried out in the presence of hazard events. A recent example is the 2012 April 11 Sumatra earthquake. There were no proper procedures to determine the effects of the earthquake. Simply fearing and anticipating the worst (i.e. playing it safe rather than sorry), the one and only action taken was to evacuate everyone 2-3 km inland. Beware of the consequences of over-alerting.
Had there been proper inter-agency communication, not just nationally but regionally, a simple procedure would have been to alert the first responders to man their stations, then monitor the updates from Indonesia or other regional agencies to stay informed and attuned to the situation before executing evacuation plans. If Indonesia were hit, execute evacuations; else stand down with an “all clear” message sent to the first responders. Evacuations are not cheap; there is a cost for all, both the public and private sectors.
During the 2011 November 21 Matara mini-cyclone, the agencies bestowed with the responsibility failed to rise to the occasion at the time of need, while agencies that were unauthorized to issue alerts, but stood up to the moment for the greater good of saving lives, were punished. There is a simple solution to breaking these silos and rivalries and integrating the agencies so that emergencies are handled in a smart and responsible way: creating a “Register of Alerting Authorities” to decentralize alerting, with policies allowing not just disaster management but all agencies holding a stake to act with jurisdiction- and hazard-specific alerting rights.
Step 1 – Establish the Register of Alerting Authorities. It is the first step towards developing a Common Alerting Protocol (CAP) country profile, which defines the jurisdictions: who can alert whom, for what hazards, and so on.
Step 2 – Agree on and mandate the country CAP profile. I was part of the team that developed the CAP profile for Sri Lanka and then field-tested it in the 2005-2008 HazInfo project for bridging the last mile. It was thereafter modified and tested in the Biosurveillance work for disseminating health alerts.
Step 3 – Adopt a situational-awareness and alerting software tool. Once the CAP profile is established, it is easy to implement and operationalize the Sahana CAP Broker, which LIRNEasia has been researching, developing, and field testing over the past half decade. The Sahana CAP Broker was field-tested in HazInfo, in Biosurveillance, and recently in voice-enabled alerting to activate Community Emergency Response Team members.
These three steps, especially the software, allow for the integration, decentralization, and monitoring of alerting responsibilities. A simple procedure using the Sahana CAP Broker, in relation to the Matara mini-cyclone incident, would be:
- The Meteorological Department identifies the potential threat of the mini-cyclone and issues an alert, to which relevant agencies such as the Fisheries Department subscribe.
- The Fisheries Department, which maintains a contact list of fishermen in the Matara District, sends an SMS to the fishermen.
- The Matara District Disaster Management Center issues a cell broadcast targeting citizens in the Matara District's coastal and vulnerable areas.
- The National Disaster Management Center notifies the TV and radio stations to make the public aware of the threat.
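The subscription flow in the steps above can be sketched as a toy publish-subscribe broker. This is an illustration only, not the SAMBRO or Sahana CAP Broker API; the agency names and channels come from the Matara scenario.

```python
# Toy sketch of the alerting flow: an issuing agency publishes an alert
# and the broker fans it out to subscribing agencies, each of which
# relays it over its own channel (SMS, cell broadcast, TV/radio).
from collections import defaultdict

class CapBroker:
    """Minimal event-type based publish-subscribe hub."""
    def __init__(self):
        self.subscribers = defaultdict(list)  # event type -> relay callbacks

    def subscribe(self, event_type, relay):
        self.subscribers[event_type].append(relay)

    def publish(self, alert):
        """Deliver the alert to every subscriber of its event type."""
        return [relay(alert) for relay in self.subscribers[alert["event"]]]

broker = CapBroker()
broker.subscribe("Cyclone", lambda a: f"Fisheries Dept: SMS to fishermen: {a['headline']}")
broker.subscribe("Cyclone", lambda a: f"Matara DMC: cell broadcast: {a['headline']}")
broker.subscribe("Cyclone", lambda a: f"NDMC: notify TV/radio: {a['headline']}")

messages = broker.publish(
    {"event": "Cyclone", "headline": "Mini-cyclone approaching the Matara coast"}
)
```

A real Register of Alerting Authorities would gate `subscribe` and `publish` by jurisdiction and hazard type; the sketch omits that authorization layer for brevity.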
In emergency-communication research and development, especially work that takes into account the latest technology developments and procedures, LIRNEasia is on par with its peers in developed countries. However, LIRNEasia cannot rejoice, as its positive findings have yet to be adopted nationally.
Even the Canadians have learned from our research, adopting last-mile warning strategies for their remote Inuit villages as well as CAP recommendations such as defining priority levels for response strategies. Despite our sharing our knowledge and making it available at the doorstep, Sri Lanka lags in establishing effective and streamlined warning and alerting procedures. Developed countries, on the contrary, are quick to grab the new ideas and implement them to the fullest. Here is an example:
The Multi Agency Situational Awareness System (MASAS) was the highlight of ISCRAM 2012, with Jack Pagotto showcasing Canadian innovation in uniting emergency coordination and real-time information exchange between agencies. MASAS is a simple spatial and temporal application that displays all kinds of situational-awareness messages on a map, or “CAP on a Map” as we CAP adopters call it. The messages can be filtered, labelled, and shared with any other system or organization. The sharing of information is through simple CAP messaging. CAP-CAN (or CAP Canada) is a well-established CAP profile that was advocated through Environment Canada. MASAS takes advantage of the policies and system efficiencies around the CAP standard and the Canadian CAP profile.
Jack Pagotto began his keynote speech with the example of a teenager's unfortunate and preventable death. A thirteen-year-old boy was suffering a severe respiratory attack (chronic asthma), and his elder sister, in the absence of their parents, called the paramedics. When the ambulance arrived in the near vicinity of the patient's home, the paramedics encountered a stretch of unmotorable flood waters and had to detour, which took an additional 20 minutes. By then the boy had passed on. Such an incident could have been prevented if the ambulance service had been aware of the local flood situation. MASAS is the catalyst for sharing situational reports across all agencies in an effort to prevent similar situations in the future. It works in such a way that all agencies with a stake in emergency work have the rights and privileges to post alerts at any level.

Keeping in mind, CAP is the underlying playmaker that allows MASAS to succeed at interagency emergency data exchange in real time. “NIEM Simplified” is a video that elegantly summarizes the discrepancies among disparate systems that prohibit swift and accurate data interchange between systems and organizations. CAP is the solution to this problem, fostering a National Information Exchange Model (NIEM). However, there are complexities, with uncertainties and the fear factor of sharing real-time emergency information. The solution is to simplify the problem and “keep it simple with CAP”, Pagotto says.
Thursday, April 5, 2012
Google Goggles for Incident Reporting
We've been experimenting with voice and text based technologies for situational reporting; more specifically, field observation reports that Community Emergency Response Team (CERT) members share with the incident management hub.
Can Google's Goggles be an instrument to improve the efficiency and effectiveness of grassroots CERT members in supplying information? We found that voice-enabled technologies are best suited for developing, non-English-speaking, less computer-literate countries like Sri Lanka. Emergency responders are familiar with simple voice calls and find them easy to use.
Given that Google's Goggles can record a voicemail and take a photo, a possible Sahana interface may be to use such a device for rapid incident reporting. The procedure is simple: tell the story about the incident (or field observation), click a photo, location- and time-stamp it, and press submit.
Monday, March 19, 2012
Naturally Interactive Voice works for Emergency Communications in Sri Lanka
It is not just Sri Lanka: in most developing countries, voice is the predominant mode of communication and can be easily adopted for emergency communications. This is my interview with Freedom Fone.
Monday, February 13, 2012
CAP Text not allowed to Speak in USA
The U.S. has banned Emergency Alerting Systems from using Text-To-Speech in broadcasting Common Alerting Protocol generated messages.
Excerpt from the article – Many of those in my community have a hard time understanding the current version of text-to-speech. In other words, us old folks can't hear what the computer is saying. There is also the issue of geographical differences in words: for example, is “soda” and “pop” the same as “soda pop” or “Coke”? If one were to write “I'd like a Coke and fries”, the computer will read that literally, and the hearer may need more information, e.g. “We don't serve Coke, is Royal Crown Cola OK?”
Here's what I had to say in the LIRNEasia blog relating it to the Freedom Fone and Sahana project.
Wednesday, February 1, 2012
Sahana Google Code-In Students work on CAP Broker
Once again Sahana participated in the Google Code-In, in 2011, and I was happy to be part of it, mentoring students. A big thrill was that they worked on a few research tasks related to the Common Alerting Protocol (CAP) Broker. The two main tasks were:
- develop a blueprint with wireframes to port the Sahana Agasti CAP Broker to Sahana Eden (the Sahana Agasti CAP Broker is no longer supported by the community; moreover, the new version built into Eden would build on the lessons learned and improve on the shortcomings of the piecewise-built original)
- develop a wireframe for an XSL Editor (mainly to develop XSL files that transform full CAP messages into short-text, long-text, and voice-text messages delivered through email, SMS, IVR, Twitter, etc.)
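The XSL Editor task is about rendering a full CAP message into channel-sized texts. The sketch below shows the same reduction in plain Python rather than XSLT, so it illustrates the idea, not the proposed editor; element names follow CAP 1.2, and the sample alert is invented.

```python
# Sketch: reduce a full CAP 1.2 alert to a 160-character SMS rendition,
# the kind of transformation the proposed XSL files would perform.
import xml.etree.ElementTree as ET

CAP_NS = {"cap": "urn:oasis:names:tc:emergency:cap:1.2"}

SAMPLE_CAP = """<alert xmlns="urn:oasis:names:tc:emergency:cap:1.2">
  <identifier>LK-MET-2011-001</identifier>
  <info>
    <event>Mini-cyclone</event>
    <urgency>Immediate</urgency>
    <severity>Severe</severity>
    <headline>Mini-cyclone approaching the Matara coast</headline>
    <area><areaDesc>Matara District coastal belt</areaDesc></area>
  </info>
</alert>"""

def cap_to_sms(cap_xml: str, limit: int = 160) -> str:
    """Keep only event, severity, headline, and area; truncate to SMS length."""
    info = ET.fromstring(cap_xml).find("cap:info", CAP_NS)
    parts = [
        info.findtext("cap:event", namespaces=CAP_NS),
        info.findtext("cap:severity", namespaces=CAP_NS),
        info.findtext("cap:headline", namespaces=CAP_NS),
        info.findtext("cap:area/cap:areaDesc", namespaces=CAP_NS),
    ]
    return " | ".join(p for p in parts if p)[:limit]

sms = cap_to_sms(SAMPLE_CAP)
```

A long-text (email) or voice-text (IVR) rendition would select more, or differently phrased, elements from the same source alert, which is why per-channel XSL files were the proposed design.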
The first work was in the HazInfo project, where we tested various wireless technologies for their ability to carry CAP messages to last-mile communities. There was an opportunity to further develop and test it for cyclones and hurricanes, but we failed to win the hearts of the NSA to secure the grant. There was also interest in building the libraries and test components to carry CAP messages over Radio Data Systems; however, we could not secure funding to try this either.
What we did achieve was testing CAP over an HF data platform. The first working Sahana CAP Broker was tested for health alerting with delivery over HTTP, email, SMS, and RSS. Most recently came the field testing of CAP messages disseminated through an IVR.
STANDBY ... There's more work to be done and shared.
Friday, December 16, 2011
A nifty way to test Speech-To-Text uncertainties with ITU's Difficulty Percentage measure
In these experiments the LIRNEasia researchers used the Freedom Fone Interactive Voice Response (IVR) system. First they conducted a survey with known values for the subjects to pick from; these answers were submitted through the IVR. Since the values were known to the human quality testers, this part of the experiment was treated as a trained speech-to-text system (i.e. a speaker-dependent, voice-recognition-type system). The second part involved the subjects submitting data that was not based on preset values; they were free to answer the questions as they pleased. This was regarded as an untrained, or speaker-independent, system.
Emulating Speech-To-Text Reliability with ITU Difficulty Scores
"The results show that with a speaker-dependent system 95% of the information could be clearly deciphered, as opposed to a speaker-independent system that was only 70% clear (blue areas in Figure 1 and Figure 2). This is not surprising; the outcomes are intuitive. In our study, reliability had two components: efficiency and voice quality. Voice quality also took into consideration the Mean Opinion Score and the Comparison Categorical Rating. The researchers wish to acknowledge that there may be disagreements over the sample sizes and the number of evaluators; these results are not ideal for drawing a 'for-all' kind of conclusion. However, at this early stage of the research, the method provides a quick and easy way to draw initial conclusions." ... Click to read full article
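As a rough illustration of the comparison reported in the excerpt, the sketch below computes a difficulty percentage, taken here as the share of submitted voice fields an evaluator could not clearly decipher, for a trained and an untrained run. The sample data are invented to mirror the 95%/70% figures; this is not the study's actual scoring code, and the ITU measure itself is defined in the relevant ITU-T recommendation.

```python
# Sketch: a "difficulty percentage" as the fraction of evaluated voice
# submissions that a human quality tester could NOT clearly decipher.
def difficulty_percentage(evaluations):
    """Percentage of evaluated fields marked as not clearly deciphered.

    `evaluations` is a list of booleans: True = clearly deciphered.
    """
    unclear = sum(1 for ok in evaluations if not ok)
    return 100.0 * unclear / len(evaluations)

# Invented samples mirroring the reported 95%-clear and 70%-clear runs
speaker_dependent = [True] * 19 + [False] * 1    # trained system
speaker_independent = [True] * 14 + [False] * 6  # untrained system

dep = difficulty_percentage(speaker_dependent)
indep = difficulty_percentage(speaker_independent)
```

The gap between the two scores is the quick initial conclusion the excerpt describes: the trained run is markedly easier to decipher, even before factoring in MOS or CCR voice-quality ratings.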