Speaking to people is the easy part. Getting the mobile devices, laptop, digital recorder, camera, and 3G modem to cooperate-- that's the circus of modern fieldwork.
Everyone-- research participants, that is-- seems delighted to take a short break from their workday and find out what I'm up to. Meanwhile, I'm balancing my laptop with a Nollywood-style video playing, repositioning the set-up on a plastic chair to find the sweet spot for the 3G modem, trying to nonchalantly hold the digital voice recorder so that participants forget I'm recording (which means I sometimes forget myself and cover the mic with my hand), and pausing from pen-and-paper note-taking to pass over a mobile phone on which they fill out a brief survey about the video they've just watched. It's the first time many of them have used a touch screen, so I crouch and lean over their shoulders, squinting in the lunchtime sun. (Here at the equator, the optimal time to use any device outside.) I am conducting an experiment, which I'm told is a very risky thesis methodology in the social sciences. With five devices in this scheme, the chances of suffering 'technical difficulties' are terribly high-- not to mention the chances of me simply dropping one of them in the dirt. The experiment is simple; the hardware is the challenge.
The who, the what, the how of it all.
Ideally, my participants are individuals engaged in their community in such a way that they might want to gather information or make reports to address a problem, take action, or shape policy-- the kind of people who might use these mobile applications in the future. This could include NGO staff, social workers, local government staff, community development project members, and ICT students. In addition, they are bilingual in English and Acholi (though they may speak other languages as well), and they have not spent more than six months outside Uganda, since longer stays may contribute to acculturation. Age was not a primary consideration, but an effort was made to speak with an equal number of men and women.
It's quick and painless.
I show a one-minute video, then ask people to tell me what they've seen: first in Acholi, then through a series of questions on a mobile device (also in Acholi), and finally a third time in English. I chose the video to approximate a scene of conflict because I am interested in how software applications can ultimately be adapted as tools in conflict resolution. The video is not particularly violent, but it shows a scuffle on a crowded street involving many people. It is unclear to the viewer why the fight started, and this ambiguity means s/he must draw on schemas, past experience, and intuition to understand what has transpired. (This connects to my previous post about narrative structure and cultural variations in interpreting narratives surrounding events.)
I will be comparing the three versions using cognitive linguistics-- a way of looking at thought and language together, focusing on how we decide (although perhaps not consciously) to articulate concepts when we have access to multiple languages in our minds. I am looking for English concepts that appear in the Acholi versions-- especially the one produced with the mobile device. This would be evidence of conceptual transfer, and transfer can happen in either direction: at the level before language is produced, even when a participant intends to use Acholi, they might be engaging with English concepts. My hypothesis is that the mobile technology triggers this engagement-- that changing the interface language to Acholi isn't enough, and that participants will still convey an anglicized narrative when using this ICT. By doing an experiment I can measure the instances of transfer. Of course, there might not be any, or they could occur in a manner I don't expect. That's the fun and the risk of doing an experiment.
Describing in quantitative terms the limitations of current technology in capturing narratives in languages extremely different from those the software was designed for is only an initial step. It can also point toward avenues for addressing those limitations. That is the next step: to contemplate what such software could look like... a step for software engineers here to consider.
Why does this matter?
There is already research showing that we remember events differently in different languages; we connect these memories to sensory and emotional information through language. If we are forced to recall events in another language, the narrative we give may differ. Stack up enough of those altered narratives as reports about a conflict, a crime, or a human rights abuse, and the final picture compounds the distortions. What would happen if the narrative could be collected another way? A method that does not yet exist, but that reflects not only the language of the individuals sharing narratives, but their notion of what constitutes information worth collecting, and still further, how to piece that information together, how to organize it. An alternative to the current logic governing ICTs.
Beyond uses for conflict management, indigenous software that boosted regional or domestic use would be economically significant in places like Uganda. If it organized information in a way that was incompatible with current software-- i.e., the new couldn't immediately talk to the old-- that would create a market for still more software to provide interoperability where it was needed. And purposeful inaccessibility could make systems more secure.
But this is all a long way off. This is the potential, the reason I find the topic so interesting. My immediate concern is contextualizing results-- such as when an individual with a degree in computer science and a job in crowdsourced ICT work prefers to have me type the text on the mobile device, or to speak their answers rather than write them in the provided text box. Is user-repellent the opposite of user-friendly?