Online Dangerous Speech Monitoring in Kenya: Umati Project's Findings from January - November 2013.

By Nanjira Sambuli
iHub Research
  Published 27 Jun 2014
2013 was an eventful year in Kenya: elections were conducted under a new constitutional dispensation and were followed by the settling-in of new governance structures (devolution), while the ICC cases against the President, the Deputy President, and a radio journalist over their alleged roles in the 2007/8 post-election violence proceeded. Alongside these political happenings, rising terror-related attacks, culminating in the Nairobi Westgate Mall attack on 21 September 2013, triggered reactions among Kenyans, some of which were expressed online. These and other events formed the backdrop of our online dangerous speech monitoring; this report highlights findings from the January to November 2013 monitoring period.


To understand how online inflammatory speech changes over time, the Umati project developed a contextualized methodology for identifying, collecting, and categorizing inflammatory speech in the Kenyan online space. To categorize hate speech, the Umati project uses Susan Benesch’s definition of dangerous speech, that is, speech that has the potential to catalyse collective violence. The five-part Benesch framework considers the speaker’s influence, the audience’s receptiveness, whether the speech content is understood as a call to action, the social and historical context of the speech, and the medium of dissemination. This framework enabled the Umati project to develop a methodology for the collection and analysis of online hate speech. We developed a categorization spectrum of offensive, moderately dangerous, and extremely dangerous speech, based primarily on the speaker’s perceived level of influence and the extent to which the content is perceived as a call to action. The project’s key findings in 2013 were:
  1. Dangerous speech captured was predominantly based on ethnicity and religious affiliation, and much online hate speech comes in reaction to events that transpire or are witnessed offline.
  2. Online hate speech disseminators largely identify themselves with a real or fake name and use languages widely understood in Kenya (English, Swahili, and Sheng).
  3. Over 90% of all online inflammatory speech captured by Umati was on Facebook, making it the largest source of such content. Though Twitter is increasingly widely used in Kenya, we noted that hate speech in the Kenyan twittersphere had been subjected to “KoT (Kenyans on Twitter) cuffing”, where tweets considered unacceptable by the status quo were openly shunned and their authors publicly ridiculed. (This self-regulation mechanism has gained popularity and is now being noted on Facebook as well.)
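
The categorization spectrum described above can be illustrated with a small sketch. The scoring scale, thresholds, and function name below are hypothetical, invented purely for illustration: Umati's actual categorization was performed by human annotators applying the Benesch framework, not by a formula.

```python
# Hypothetical sketch of the Umati categorization spectrum.
# Scores and thresholds are invented for illustration; the real
# project relied on human annotators applying the Benesch framework.

def categorize(speaker_influence: int, call_to_action: int) -> str:
    """Map two perceived scores (each 0-5) to a spectrum category.

    speaker_influence: how influential the speaker is to the audience.
    call_to_action: how strongly the content reads as a call to action.
    """
    if not (0 <= speaker_influence <= 5 and 0 <= call_to_action <= 5):
        raise ValueError("scores must be in the range 0-5")
    combined = speaker_influence + call_to_action
    if combined >= 8:
        return "extremely dangerous"
    if combined >= 4:
        return "moderately dangerous"
    return "offensive"

# Example: a highly influential speaker issuing an explicit call to action.
print(categorize(5, 5))  # extremely dangerous
print(categorize(1, 1))  # offensive
```

The sketch captures the intuition that the same words are more dangerous when spoken by an influential figure to a receptive audience; the two inputs mirror the two variables the project weighted most heavily.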


Running since September 2012, the Umati project has created the largest database of hate speech incidents in any one country (over 7,000 incidents). The project is now in its second phase, automating the online monitoring process where applicable so that the methodology can be more easily replicated locally and in other countries. Though instances of online hate speech catalysing events offline have not yet been well established, we believe that the project’s findings offer a window of insight into the state of Kenyan society. From this we conclude that the root causes of hate speech—both online and offline—should be investigated and addressed. Monitoring, in and of itself, is not a complete solution.