
When Hurricane Katrina hit the American coast in 2005, Facebook was a newcomer to a still-developing World Wide Web, there was no Twitter to deliver news updates, and fewer than 70% of citizens owned a mobile phone. Today, with more portable devices than citizens and constant interaction through social networks, the way we obtain and share information during a crisis has drastically improved. This proved very helpful in recent crises such as the 2013 Super Typhoon Haiyan in the Philippines, where Twitter was the single greatest information source for response and recovery efforts.

Social media is becoming essential for authorities to access vital information provided by citizens that would not otherwise be available, improving both the prevention of and the response to critical events. However, social network information is largely unstructured, because anyone can be an information source: eyewitnesses, emergency responders, and NGOs reporting from the ground; mass media amplifying the message; or even outsiders expressing sympathy and emotional support. In this context, many factors affect how information flows; hashtag usage, for instance, is very diverse and can hamper the identification of relevant data. It is therefore necessary to analyse social media to put the pieces of the puzzle together.

The extraction and analysis of social media information is an important part of the I-REACT project. Information obtained from citizens will complement data coming from earth observations, UAVs, and emergency responders, among other sources, to provide real-time data on floods, wildfires, earthquakes, and other natural disasters. To this end, Natural Language Processing (NLP) technologies developed by the I-REACT partner CELI are being used to analyse big data streams from social media.

To do this, large amounts of information are first collected from social networks through searches on generic keywords such as “earthquake” or “flood”. Although this information is unstructured, all or most of the emergency-related material can be gathered this way. Because this data can be compared with that of past events and with “regular” behaviour on social networks, a vital piece of information can be generated: whether something unexpected is going on, so that the occurrence of an emergency can be spotted in real time.
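
As a rough illustration of this burst-detection idea (not the actual I-REACT/CELI implementation, which is not described in this post), the sketch below counts keyword-matching posts per time window and flags windows that deviate sharply from the recent baseline. The keyword list, window size, and threshold are all hypothetical choices for the example.

```python
from collections import deque
from statistics import mean, stdev

# Hypothetical sketch of keyword collection plus anomaly spotting.
KEYWORDS = {"earthquake", "flood", "wildfire"}

def matches(post_text: str) -> bool:
    """Crude keyword filter standing in for the real collection queries."""
    text = post_text.lower()
    return any(kw in text for kw in KEYWORDS)

def is_anomalous(window_counts, current_count, threshold=3.0):
    """Flag a window whose count exceeds the baseline mean by threshold * stdev."""
    if len(window_counts) < 10:  # not enough history for a baseline yet
        return False
    baseline, spread = mean(window_counts), stdev(window_counts)
    return current_count > baseline + threshold * max(spread, 1.0)

# Rolling history of per-window counts, e.g. posts per 5-minute window.
history = deque(maxlen=288)  # roughly one day of 5-minute windows

def process_window(posts):
    """Count matching posts in one time window and check it against history."""
    count = sum(1 for p in posts if matches(p))
    alert = is_anomalous(history, count)
    history.append(count)
    return alert
```

A sudden spike of “earthquake” mentions relative to the usual background chatter would then surface as an alert, which is the kind of signal that lets an emergency be spotted in real time.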

This information is then validated through linguistic analysis and machine learning techniques. Here, it is possible to select the emergency-related content and identify useful information such as the type and location of the event, the casualties, or the damage to infrastructure and services. In addition, the sentiment of each message can be extracted, which is important for creating panic maps and for prioritising actions on the ground. Once an event has concluded, the system keeps collecting data so that its ability to spot new emergencies from social media can be continuously tested. In this way, the tool progressively learns and refines its ability to identify disasters.
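
By way of illustration only (the production system relies on CELI's NLP technologies, whose internals are not covered here), the two-step idea of filtering relevant content and then extracting structured details can be prototyped with off-the-shelf tools such as scikit-learn. The training examples, place gazetteer, and helper names below are all hypothetical.

```python
# Hypothetical prototype of the two-step analysis described above,
# built with scikit-learn; NOT the actual I-REACT/CELI pipeline.
import re
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: 1 = emergency-related, 0 = not.
texts = [
    "major flood downtown, roads closed",
    "earthquake felt across the city, buildings evacuated",
    "what a lovely sunny day at the beach",
    "new phone just arrived, so happy",
]
labels = [1, 1, 0, 0]

# Step 1: a text classifier filters emergency-related content.
classifier = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                           LogisticRegression())
classifier.fit(texts, labels)

# Step 2: extract structured details from messages flagged as relevant.
# A real system would use trained named-entity recognition models;
# a simple gazetteer and word-list lookup stand in for them here.
KNOWN_PLACES = {"downtown", "riverside"}
CASUALTY_WORDS = {"injured", "casualties", "dead", "missing"}

def extract_details(message: str) -> dict:
    words = set(re.findall(r"\w+", message.lower()))
    return {
        "locations": sorted(KNOWN_PLACES & words),
        "mentions_casualties": bool(CASUALTY_WORDS & words),
    }

new_message = "severe flood in downtown, several injured"
if classifier.predict([new_message])[0] == 1:
    print(extract_details(new_message))
    # e.g. {'locations': ['downtown'], 'mentions_casualties': True}
```

Sentiment scoring would slot into the same flow as another per-message annotation, feeding the panic maps mentioned above, and newly collected messages from concluded events can be folded back into the training data to keep refining the classifier.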

Overall, social media analysis provides fast and relevant information during emergencies, highlighting that these communication channels are not only changing the way we live and interact with each other, but also making every citizen an essential part of the fight against disasters.