Ushahidi and other crowdsourced projects have shown just how useful it is to enable ordinary people to use the tools they already use every day to report what is happening around them. But the success of these tools can be overwhelming if we don't have a way to separate irrelevant, spammy or fictitious reports from relevant ones. As repressive governments, spammers and others learn more about the technology and its weaknesses, and as the streams of information around a particular issue become gushing rivers, we'll need strategies to keep social software like Ushahidi useful and relevant while keeping it easy and accessible for ordinary people to use.
Inspired by Patrick Meier's post, "How to Verify Social Media Content: Some Tips and Tricks on Information Forensics", I thought it would be a good idea to interview Ushahidi users about specific cases of misinformation and how they dealt with "dirty data" problems. Last week I interviewed Fareed Zein, who led the Sudan VoteMonitor project during Sudan's first multiparty elections in 26 years (read more about the project here). Although based in the US, the Sudan Institute for Research and Policy (SIRP) relied on a host of civil society groups that it trained to report on the elections on the ground in Sudan. Fareed said that they used Ushahidi to get the word out about the progress of the elections because, unlike other media, the Web was less restricted and enabled reporting by ordinary people.
"The government had control over the media and they were in a position to do whatever they wanted. We wanted to let the world know what was going on."

Fareed said that as reports indicating harassment, intimidation and closed polling stations came in, civil society monitors on the ground verified reports by calling the person who sent the report, the relevant election centers or other monitors, and then posted these to the 'verified' category in Ushahidi.

"On the second day our local partners started to see reports that were pro-government… There was a fairly sophisticated operation in the national security operation in Sudan. They have a technology division - they control all the ISPs - all of them are subject to censorship by the government. The national security agency gets to monitor traffic."

Fareed said that a report would come in saying something like:
"Elections in such and such location is very orderly and citizens are voting overwhelmingly for President Bashir."

When I asked him how they knew these reports were false, he said that, first, the report was suspicious because nobody would (or should) know whom people had voted for, and second, these reports stood out from all the other reports coming in; it made sense that government supporters were trying to divert attention away from cases of violence and intimidation in the face of international pressure.
"It was clear that someone was intentionally trying to color the perception. The regime in Khartoum was very worried about international perception. And they were worried about legitimacy."

The site was shut down after the second day, says Fareed.
"They decided they didn't like what was being reported and we had to go through a lot of intervention to try and bring it back online. They were clearly watching the site."

A few things stand out about this example in terms of countering misinformation:

1. The availability of personnel on the ground to verify reports, both physically and culturally: physically, because people close to the source are more likely to have witnessed something; culturally, because people from the area are more likely to understand clues in the text and the motivations of stakeholders.

2. An up-front strategy for dealing with false reports. Fareed said that the team had expected misinformation given the political climate during the elections, so they had a strategy in place, working with experienced local civil society groups.

3. Making unverified reports available. Rather than deleting unverified reports, Ushahidi enabled the VoteMonitor team to indicate which reports had been verified, allowing citizens to judge the validity of the rest for themselves (a minimal sketch of this approach in code follows at the end of this post).

Has your project encountered misinformation, and how have you dealt with it? I'm helping Patrick Meier compile a series of stories about this. Please email me or comment on the story below.
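For readers curious what the "mark, don't delete" approach in point 3 might look like in code, here is a minimal sketch. The `Report` class, its fields and the helper functions are hypothetical illustrations of the idea, not Ushahidi's actual schema or API:

```python
from dataclasses import dataclass
from datetime import datetime


# Hypothetical report model -- the field names are illustrative only.
@dataclass
class Report:
    text: str
    location: str
    received_at: datetime
    verified: bool = False  # reports start unverified; they are never deleted


def verify(report: Report) -> None:
    """Mark a report as verified once a monitor has confirmed it,
    e.g. by calling the sender, an election center or another monitor."""
    report.verified = True


def public_feed(reports: list[Report]) -> list[dict]:
    """Publish every report, verified or not, with its status visible,
    so readers can judge unverified reports for themselves."""
    return [
        {
            "text": r.text,
            "location": r.location,
            "status": "verified" if r.verified else "unverified",
        }
        for r in reports
    ]


if __name__ == "__main__":
    reports = [
        Report("Polling station closed early", "Location A", datetime.now()),
        Report("Voting orderly, overwhelming support for one candidate",
               "Location B", datetime.now()),
    ]
    verify(reports[0])  # confirmed by a monitor on the ground
    for entry in public_feed(reports):
        print(entry)
```

The design choice worth noticing is that verification is a status, not a filter: suspect reports stay visible, clearly labelled, rather than silently disappearing.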