Earlier this August we (Brendan Heberton and myself) had the chance to visit Camp Roberts to participate in John Crowley & Co’s humanitarian relief experiments. On this trip we wanted to begin empirical testing of how useful social media streams like Twitter are during disasters, and what the potential is for streaming analysis. This could be a massive topic and experiment, so we boiled it down to a few basic tests. Catherine Starbird of Tweak-the-Tweet at CU Boulder was kind enough to donate the Tweets from this summer’s Colorado wildfires. This gave us a nice pull of about 250,000 Tweets to work with.

Right off the bat we got a fresh reminder of how few people include a GPS location with their Tweets – only about 2,500 in the case of the wildfires. Unfortunately the Twitter place IDs, which provide roughly city- to neighborhood-level accuracy for 15-30% of Tweets, were not included in the data. It was still a very interesting sample of data and made for some useful analysis. Since a walkthrough of every experiment we ran would get overly long and boring, I boiled it down to a list of challenges and potential best practices we discovered for working with the data.

CHALLENGES

  • The volume of data with precise geographic coordinates tends to be quite low as a percentage of the overall data (1-2%).
    • This allows for the tactical extraction of specific Tweets, but makes for bad samples or indicators of the overall population.
  • In addition to the demographic bias of Twitter users, there is also a bias in the volume of Tweets coming from individual users.
    • Current analysis techniques treat all Tweets equally, whether it is a single Tweet from a user about the disaster or a user who Tweets 120 times in a day.
    • Aggregate analysis and correlation can be unduly influenced by deviations in the volume of Tweets from individual users.
    • Specifically we called this the Raceboi8 problem, after a user in the wildfire data who Tweeted 10x more than the next closest user and was far removed from the threat area of the wildfires.
    • Without normalizing the Tweet volume per user this can create false positives in the data (see the normalization sketch below).
  • The need to have a specific keyword to identify disaster-related Tweets means social media can’t be used until the disaster is well underway and the community has established keywords or hashtags.
    • Tweak-the-Tweet does a good job of curating data with the community but requires a taxonomy to emerge from the community before it can start generating data.
    • This sacrifices the capacity to tap Twitter as an early warning and indicator of an emerging disaster.

The Raceboi8 problem was a particularly thorny issue, and I think a picture is worth a thousand words in this case:

That is one tenacious and pacing Tweeter!
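
Normalizing for this kind of skew is cheap to prototype. Here is a minimal sketch, assuming a pandas DataFrame with hypothetical 'user' and 'text' columns (not our actual pipeline), that weights each Tweet by the inverse of its author’s total volume so a single prolific account can’t dominate aggregate statistics.

    import pandas as pd

    def per_user_weights(tweets: pd.DataFrame) -> pd.Series:
        # Each Tweet gets weight 1 / (author's total Tweet count), so every
        # user contributes equally to aggregate statistics.
        counts = tweets.groupby("user")["text"].transform("count")
        return 1.0 / counts

    # Example: weighted Tweet volume per grid cell instead of raw counts
    # tweets["weight"] = per_user_weights(tweets)
    # cell_volume = tweets.groupby("grid_cell")["weight"].sum()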

BEST PRACTICES

While there are a good number of challenges with using social media streams like Twitter, I think there is also huge opportunity.  So, we wanted to share some of the approaches we took to tackling the problems we’ve encountered using social media during disasters.

  • Filter Tweets by proximity to the disaster, through intersection or a buffer, to remove Tweets from outside observers reacting to the news rather than people actually involved in the disaster.
  • For aggregate statistics and analysis, normalize Tweets by each user’s volume of activity to remove sample bias (as sketched above).
  • When doing macroscopic analysis like pattern detection, use the entire Twitter dataset for calculations, then infer geography after the analysis has been run to do mapping and geoprocessing.
  • Use pattern detection techniques to identify emergent hashtags and keywords that can be used to kick off targeted searches for tactical use (see the sketch after this list).
  • Use spatial/temporal regressions to find social media voids by identifying geographies with more or less activity than the population would predict. Entropy could be another good indicator if users wanted to make this a dynamically updating analysis.
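
To make the emergent-keyword idea concrete, here is a rough sketch of one way to surface spiking hashtags: compare each tag’s share of a recent window against its share of a longer baseline. The column names ('created_at', 'text'), window sizes, and spike ratio are illustrative assumptions, not what we ran at Camp Roberts.

    import re
    from collections import Counter
    import pandas as pd

    HASHTAG = re.compile(r"#\w+")

    def tag_counts(texts):
        # Count hashtag occurrences across an iterable of Tweet texts
        return Counter(tag.lower() for text in texts for tag in HASHTAG.findall(text))

    def emergent_hashtags(tweets, recent="1h", baseline="24h", spike_ratio=5.0):
        now = tweets["created_at"].max()
        recent_tags = tag_counts(tweets.loc[tweets["created_at"] >= now - pd.Timedelta(recent), "text"])
        base_tags = tag_counts(tweets.loc[tweets["created_at"] >= now - pd.Timedelta(baseline), "text"])
        recent_total = max(sum(recent_tags.values()), 1)
        base_total = max(sum(base_tags.values()), 1)
        # Flag tags whose share of recent traffic far exceeds their baseline share
        return [tag for tag, n in recent_tags.items()
                if n / recent_total > spike_ratio * base_tags[tag] / base_total]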

A few visual examples are always nice to put these techniques in context:

In the analysis we filtered Tweets by their proximity to the wildfires in Colorado Springs and to critical infrastructure. The gridded thematic shows nighttime population, courtesy of the LandScan project at ORNL.
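
For anyone who wants to try the proximity filter themselves, a simplified version using shapely might look like the sketch below. The fire perimeter, buffer distance, and Tweet fields are placeholders, not the geometries we actually used.

    from shapely.geometry import Point, Polygon

    # Placeholder perimeter roughly northwest of Colorado Springs
    fire_perimeter = Polygon([(-104.95, 38.88), (-104.85, 38.88),
                              (-104.85, 38.96), (-104.95, 38.96)])
    # Rough ~5 km buffer expressed in degrees; a projected CRS would be more accurate
    threat_area = fire_perimeter.buffer(0.05)

    def in_threat_area(lon, lat):
        # Keep only Tweets whose coordinates fall inside the buffered perimeter
        return threat_area.contains(Point(lon, lat))

    # nearby = [t for t in tweets if in_threat_area(t["lon"], t["lat"])]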

To detect hot spots and voids we ran a simple linear correlation between the population in each grid cell (independent) and the number of Tweets (dependent). The resulting map highlights areas with more Twitter activity than the population would predict. The dark green square in the lower left-hand corner ended up being the location of the local TV station, which was pushing out a deluge of Tweets. This and the Raceboi8 example drove home the need to normalize the data by Tweet volume for statistical analysis.
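
The calculation itself is simple to sketch: fit a linear model of Tweet counts on population per grid cell and map the residuals. The array and column names below are placeholders; large positive residuals are hot spots, large negative residuals are voids. Weighting the counts per user, as above, would help remove effects like the TV station.

    import numpy as np

    def activity_residuals(population, tweet_counts):
        # Fit tweets ~ a * population + b and return the residuals; positive
        # values are cells with more Twitter activity than population predicts
        # (hot spots), negative values are voids.
        a, b = np.polyfit(population, tweet_counts, 1)
        return tweet_counts - (a * population + b)

    # residuals = activity_residuals(grid["landscan_pop"].values, grid["tweets"].values)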

Lastly, we hooked the data up to Twitch and did some streaming analysis to see what patterns emerged in the data as it was aggregated over time. Generally Twitch is better suited to real-time events, but it was still useful for some post-event pattern analysis.
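
Twitch handled the streaming side for us, so rather than guess at its API, here is a tool-agnostic sketch of the kind of time-windowed aggregation that feeds this sort of trend watching. The field name and 15-minute window are assumptions.

    from collections import defaultdict

    def windowed_counts(stream, window_minutes=15):
        # Bucket an iterable of Tweets (each with a 'created_at' datetime) into
        # fixed time windows and return (window_start, count) pairs in order.
        # window_minutes should divide 60 evenly for clean bucket boundaries.
        buckets = defaultdict(int)
        for tweet in stream:
            ts = tweet["created_at"]
            start = ts.replace(minute=(ts.minute // window_minutes) * window_minutes,
                               second=0, microsecond=0)
            buckets[start] += 1
        return sorted(buckets.items())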

In the course of the experiment we had some great conversations and help from Nat Wolpert and Katie Baucom of NGA’s disaster response team, Chris Vaughn and Michael Gresalfi of FEMA, and data help from Chris Mayfield of NORTHCOM. Their ideas and feedback have given us a new set of motivations for further developing the ideas here. We hope to have more to show to that end soon. Thanks to Crowley & Co. for hosting us at another great event, and thanks to Brendan Heberton for the heavy lifting!

 
