Monday, 20 October 2014

Think Outside the Lab


This brings me full circle back to the initial post with which I launched this blog. My communicative purpose and challenge was to work within an impossible sort of Venn diagram of three disciplines that never seem to collaborate: computer science (developers, information scientists, and engineers), linguistics (including cognitive linguistics, but really all language-oriented studies), and finally political science (policy-oriented researchers and often practitioners such as human rights activists or crisis response managers).  Pairs of these disciplines can be found teaming up, but an effort combining the insights of all three is, sadly, very rare indeed. 

A recent project out of MIT by Berzak, Reichart, and Katz pursued the hypothesis that structural features of a speaker's first language will transfer into written English as a Second Language (ESL) and can be used to predict the first language of the speaker.  I believe their work does not go far enough in two respects: first, in its sampling; and second, in considering how parsing communication in this manner might be applied to software design.

Their paper addresses the sampling problem as one of resources.  They acknowledge that there are over 7000 languages and that a written corpus (their data pool) exists for only a relative few.  My critique, however, is that they consider all languages members of the same sample set for their experiment.  Katz explains in an interview that he was drawn to investigate and algorithmically describe 'mistakes' made by Russian speakers in English.  These mistakes are called linguistic transfer because an element from the first language is transferred into the second.  (Reverse transfer can also happen, when a new language affects the first language.)  Linguistic transfer can come in several forms: phonological/orthographic (mistakes due to sound or spelling), lexical/semantic ('false friends'), morphological/syntactic (grammar mistakes), sociological/discursive (such as appropriateness or formality), and conceptual (categories, inferences, event elements, concepts generally).  If Katz's group had differentiated between types of mistakes, they might have improved the rate of prediction success in their results.  It is also unclear whether their model was able to incorporate more complex types of transfer such as discursive or conceptual transfer.  One reason they may not have seen the need to differentiate by type was their limited sample. 
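
To make the distinction concrete, here is a minimal sketch of what differentiating by transfer type could look like in a native-language-identification classifier. This is purely illustrative and not the MIT system; the feature extractors below are crude stand-ins (character and word bigrams) for real error annotators, and all names are my own.

```python
# Illustrative sketch only: keep each type of linguistic transfer in its own
# feature group when predicting a writer's first language from ESL text,
# rather than pooling all 'mistakes' together.
from collections import Counter
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def orthographic_features(essay: str) -> Counter:
    # Stand-in for a phonological/orthographic error annotator: character bigrams.
    return Counter(f"ortho:{essay[i:i+2]}" for i in range(len(essay) - 1))

def syntactic_features(essay: str) -> Counter:
    # Stand-in for a morphological/syntactic error annotator: word bigrams.
    words = essay.lower().split()
    return Counter(f"syn:{a}_{b}" for a, b in zip(words, words[1:]))

def extract_features(essay: str) -> dict:
    feats = Counter()
    feats.update(orthographic_features(essay))
    feats.update(syntactic_features(essay))
    # Discursive and conceptual transfer would need their own annotators,
    # which is exactly the hard part this post argues is still missing.
    return dict(feats)

model = make_pipeline(DictVectorizer(), LogisticRegression(max_iter=1000))
# Usage, with hypothetical data:
# model.fit([extract_features(e) for e in essays], first_language_labels)
# model.predict([extract_features(new_essay)])
```

Because each feature carries its transfer-type prefix, the learned weights can later be inspected per type, which is what would let you ask whether conceptual transfer, and not just spelling or grammar, is doing the predictive work.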

Most linguistics studies (or research asserting multi-lingual or multi-cultural value) that purport to incorporate a broad range of languages, in fact, only draw from a small number of closely related languages that don't possess particularly profound differences in conceptual organization of information. That means if there were instances of conceptual transfer, they would be rare or at least difficult to detect.  (Most studies look at Indo-European languages, plus perhaps Russian, Hebrew, Korean, or Japanese, to appear to have real diversity.)  Among the nearly 7000 languages, there are only 100 or so that have a literature; it is this group of languages that is most frequently studied. These languages are, therefore, ones which have a strong history and preference for writing (called chirographic), and this mode of communication has had an effect on many cognitive processes within the populations that speak them.  The rest of the 7000 are predominantly oral, and oral languages are very rarely represented in the sample sets (of any study).  Orality is not to be confused with illiteracy; it is a preference for communication, and most speakers of predominantly oral languages also speak and operate in chirographic languages.  The impact on cognitive processes such as categorization, problem solving, ordering for memory, imagination, memory recall, etc., is connected to a need to rely on sound and associated mnemonics for information organization.  If you cannot write something down, this changes your strategy for remembering something, for working through a problem, or for any number of other cognitive processes.  The linguistics studies that fail to represent a member from this set of predominantly oral languages make an egregious sampling error which leads to false conclusions about universal or easily modeled qualities of communication.  Orality is a profound variable in terms of its effect on cognitive processes. That is why investigating and describing communication at a conceptual level, drawing from languages much more distant from the typical baseline of English, would yield some surprising results.

The second problem with the MIT study is one of anticipating a use for the findings.  Quoting from the press release about their work:
"These [linguistic] features that our system is learning are of course, on one hand, of nice theoretical interest for linguists,” says Boris Katz, a principal research scientist at MIT’s Computer Science and Artificial Intelligence Laboratory and one of the leaders of the new work. “But on the other, they’re beginning to be used more and more often in applications. Everybody’s very interested in building computational tools for world languages, but in order to build them, you need these features. So we may be able to do much more than just learn linguistic features. … These features could be extremely valuable for creating better parsers, better speech-recognizers, better natural-language translators, and so forth." (L. Hardesty for MIT news office 2014)

Yes, so true.  However, it's not theoretical at all, nor is it simply the folly of linguists to pursue communication variation at a conceptual level.  Using conceptual frames (Minsky, 1974) has already proven to be an effective method for improving search capability in map tools: Chengyang et al. (2009) shifted a map search tool to operate from conceptual frames rather than conventional English search terms.
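
As a toy illustration of the difference (my own sketch, not the Chengyang et al. implementation), a keyword query flattens the question into search terms, while a frame-based query fills conceptual slots that a crime-related spatial question is actually about. The slot names and the tiny parser below are invented for demonstration.

```python
# Toy contrast between keyword search and a frame-based query representation.
from dataclasses import dataclass
from typing import Optional

@dataclass
class CrimeQueryFrame:
    event_type: Optional[str] = None    # e.g. "burglary"
    location: Optional[str] = None      # e.g. "the university"
    time_range: Optional[str] = None    # e.g. "last month"

def naive_keyword_query(text: str) -> list[str]:
    # Keyword search: just a bag of terms, no concept of what they refer to.
    return text.lower().split()

def toy_frame_parser(text: str) -> CrimeQueryFrame:
    # Extremely simplified slot filling, for illustration only.
    frame = CrimeQueryFrame()
    lowered = text.lower()
    if "burglar" in lowered:
        frame.event_type = "burglary"
    if "near" in lowered:
        frame.location = lowered.split("near", 1)[1].strip(" ?")
    return frame

print(naive_keyword_query("Any burglaries near the university?"))
print(toy_frame_parser("Any burglaries near the university?"))
```

Once the question lives in slots like event_type and location, the system can reason over concepts (and map them spatially) instead of matching strings, which is the shift the Chengyang et al. work points toward.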

Katz and his lab at MIT are credited with the work that led to Siri, and this new study could be applied to language-processing tools so that patterns of mistakes become predictable and thus correctable.  It could also be added to text-scanning tools to detect the first language of non-native English authors on the web, thus adding to the mass surveillance toolkit.

I think a lot more could be done (but hasn't been) with this methodology in terms of looking at how an oral language's conceptual frames could be described and then used to calibrate a more responsive information and communication application.  I used a very similar methodology to the MIT researchers' in my experiment on reverse linguistic transfer last year (which I have been charting on this blog).  I compared sets of bilingual narratives and looked for patterns of 'mistakes,' but I was interested in what these mistakes could tell us about the communication needs of the users (mobile technology users in rapidly growing markets like Africa, South East Asia, or South America).  My hypothesis was that the structures of their first language were being distorted, converted into 'mistakes,' in order to fit a prescribed (foreign) conceptual structure of the software application.  What I found was much more complex than counting instances of mistakes.

What I observed, and quantified, was that when comparing the first-language oral narrative to the first-language narrative given via mobile report (either as an SMS or as a smartphone app question series), three-quarters of the participants expressed dramatically different narratives in the mobile report format than in their initial first-language oral narratives.  That means that translating interfaces isn't sufficient to provide communication access.  There are underlying conceptual aspects to communication that have yet to be addressed and that are inherently cultural (currently mono-cultural).  Due to the complex nature of concepts such as justice, personhood, time, or place, identifying and isolating instances of transfer was very challenging.  A summary of the results is forthcoming in two papers as well as my doctoral research, but the main conclusion I reached was that conceptual-level parsing of communication, informed by insights from oral languages, should be integrated into the design of communication and information management software.  Including this variable in indigenous software design will increase the ability of users from rapidly growing markets to participate in and leverage information and communication technology in a manner that meets their needs.
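
For readers who want a feel for how such a comparison can be quantified at the simplest level, here is an illustrative sketch. It is not the coding scheme used in my study; the example narratives and the narrative_elements helper are invented for demonstration.

```python
# Illustrative sketch: how much of an oral narrative survives the move into a
# structured mobile-report format, measured as overlap of narrative elements.
def narrative_elements(text: str) -> set[str]:
    # Stand-in for proper annotation of event elements (actors, actions, places, motives).
    stopwords = {"the", "a", "and", "was", "were", "to", "of", "in"}
    return {w.strip(".,:").lower() for w in text.split() if w.lower() not in stopwords}

def overlap(oral: str, mobile: str) -> float:
    o, m = narrative_elements(oral), narrative_elements(mobile)
    return len(o & m) / len(o | m) if o | m else 1.0

# Invented example narratives, for demonstration only.
oral_version = "The neighbours came and separated the two men near the borehole"
mobile_version = "Attacker: unknown man. Victim: unknown man. Location: borehole."
print(f"element overlap: {overlap(oral_version, mobile_version):.2f}")
```

A low overlap score is only a crude signal; in the actual analysis the interesting part is which elements (here, the community's intervention) disappear when the narrative is forced through the report format.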

This topic will be continued with highlights from forthcoming publications.

References:
Chengyang, Z., Yan, H., Rada, M. and Hector, C., 2009. A Natural Language Interface for Crime-Related Spatial Queries. In: Proceedings of IEEE Intelligence and Security Informatics, Dallas, TX, 2009. 

Jarvis, S. and Crossley, S. 2012. Approaching language transfer through text classification: Explorations in the detection-based approach. Multilingual Matters, volume 64.

Jarvis, S. and Pavlenko, A. 2007. Crosslinguistic Influence in Language and Cognition. London; New York: Routledge.

Tuesday, 30 September 2014

The Post-Terror Generation

When did generations stop being defined in terms of war?  In terms of the hardship that fueled dreams of a brighter future for the generation to follow?  The name of an age once served as a reference and reminder to ill-fated political policies, to the suffering wrought by our own hubris, so that we might shield future generations from the mistakes that have cost us our chance.  When we examine a generation, its defining characteristic has most often been forged by war.

The Lost Generation witnessed the hardship of WWI and the political forces that disappeared the golden age into the modern age.  The Victorian Age before it was synonymous with colonialist expansion and industrialization.  Generation X was the first generation to be defined without direct reference to a defining war or hardship because theirs was a generation without a cause, without fight or direction.  It was defined by its 'anti' characteristics, most notably apathy, in contrast to its parent generation, the post-WWII Baby Boomers.

There is now a post-terror generation: children born near the end of the 20th century, and certainly after 9/11, who have never known a world that was not caught up in the War on Terror, a world not consumed at every cultural level by a political policy structure reacting to it.

'The post-terror generation' was the term used by Edward Snowden in an interview with Vanity Fair in April 2014 to describe the millennials, a group that polling research has found to be more optimistic and less polarized.  They are trying not to repeat the behaviors they have witnessed in the generation that preceded them: the US engaged in a multi-front war, a contentious, non-stop screaming match between political parties, and more and more leaders who find it easier to blow up a problem than to reach out and build a solution (Generation Terror by Michael Scarfo).  The post-terror generation will look past the bombastic pressures that rile or depress the rest of us as simply white noise. 

The War on Terror is a black hole of policies that have pulled us into war and obscured our focus on the environment, the banking collapse, poverty, education, healthcare... creating an unparalleled atmosphere of partisan rancor.  The post-terror generation had no other course but to rebel against this failed model.  Ironically, because they have always lived with the looming threat of terror, the fear does not govern their lives in the way it does for the previous generation, who felt a tectonic shift had robbed them of an inalienable security.   

The evolution of current security policy is certainly more complex than this post delves into; here, I am interested in pondering how this young generation coming up behind me sees the world, and why we don't have cooler names for generations!  The alphabet system has run its course.  I vote for a return to connecting ourselves to a defining struggle.  The post-terror generation suits these kids.

Wednesday, 20 August 2014

The Slow Drip Invasion: use of ICT and UAV in weak states

If you imagine the challenges faced by local communities plagued by conflict and institutional instability, someplace for example like Somalia, a nation that has faced profound governance issues (sorry to pick on Somalia, but this post is about weak states), and you work in any part of the humanitarian or development organization network, it may seem perfectly reasonable to empower local communities and civil society groups to collaborate on Alternative Modes of Governance.  In this way, communities can see to their own basic needs.  Mobile technology has emerged as a resource that development experts are thinking creatively about in order to tackle these types of issues.  The rapid influx of phones, and more generally of information and communication technology (ICT), has presented new opportunities for governance according to several influential tech architects.  In a new book, Bits and Atoms: Information and Communication Technology in Areas of Limited Statehood, edited by Steven Livingston and Gregor Walter-Drop, contributing authors such as Patrick Meier of Ushahidi (an SMS platform relied on by branches of the UN and US government) as well as development heavyweights such as Dr. Sharath Srinivasan, Director of the Centre of Governance and Human Rights (CGHR) at the University of Cambridge, explore ICT's viability as an Alternative Governance Modality.  They discuss the effects of ICT proliferation within the slums of Kenya and Russia as well as other areas that are considered to have limited or weak central governments.  

Alternative Governance Modality.  It sounds like a very neat solution.  Certainly seems like the best option in the slums of Kibera, Kenya.  If you break it down, it becomes decidedly less neat.  Alternative to what exactly?  If local community groups and NGO/civil society partnerships have been capacitated with ICT, where did that technology come from?  Where did the policy directives come from?  Is there a Dutch NGO or USAID project that has essentially invaded a small corner of some weak state that is unable to object, all via mobile device?  And where does the data go?  Who is it for?   
I am not advocating that communities should not organize or utilize whatever means they find in order to address the issues they face; however, the authors are not being entirely honest in their assessment of 'locally empowering' when they describe the use of ICT in these projects.  The tech tools employed are inherently external and foreign objects.  Indigenous ICT simply does not exist yet.  The means to develop it, incorporating the cultural nuance of information and communication preferences from Kibera or a Somali community, might exist, but these design techniques are not being used in the humanitarian ICT field.

The focus is mistakenly on simplicity, assuming that streamlining applications will overcome literacy issues or even culture barriers.  This approach compounds the problem.  What Western designers understand as the most logical, the most simple, the most intuitive, inherently expresses their conceptualization of how information should be organized and how it connects.  The ways in which information can be connected and organized at a conceptual level are by no means universal, particularly if you take into account differences between more predominantly oral cultures.  (Check out some earlier posts on orality; it's not the opposite of literacy, but a cultural communication preference with cognitive implications.)  By concentrating or streamlining the design, it becomes extra-Western in its conceptualization and thus even more distorting to non-Western information and communication intentions.

The current interface and information design empowers Western users, not the users described in these 'weak state' contexts.  By capacitating local groups with ICT, the authors are describing a situation for linking up new populations to a vast data network, connecting them as potential sources of information and points of leverage (perhaps for Western policy makers, perhaps for commercial enterprise).  ICT which captures information in ways useful to non-Western users, ICT that functions as a tool, a potential policy-making aid or technological advancement outside the Western concept, has not been fully realized; therefore, what does exist is only useful or empowering to the group that designed it, that released it in the field, that is writing about its vast potential, because that potential will be for them.  

The problem of data collection in weak states is brought into relief by the use of UAVs (drones).  The images they provide are meant to enable humanitarian crisis responders to more efficiently get to know 'the lay of the land' when called to work.  Who could argue with technology that improves humanitarian missions and potentially saves lives?  This was the original purpose of ICTs like Ushahidi: crisis response.  But there is a jump in the script, a missing step (or a few missing steps) that takes us from designing a successful tool for crisis response, in which information is marvelously organized and communications are streamlined during a short-term intensive mission by external actors, to a stage where this technology is being used for long-term governance by indigenous populations.  These are two massively different tasks, not to mention two different sets of users.  How did these parameters escape the designers?  How did we just slide into an alternative governance modality from what was initially a cobbled-together system to organize the hectic atmosphere of crisis response?  How did these projects go from responding to crisis (after the fact) to inserting themselves into the fabric, the airspace, of weak states in an ongoing capacity with the stated aim of preparedness?  One reason mapping projects seem to empower the technology providers more than the local population: in the region of Northern Uganda where I recently did fieldwork, the local language has no word for map, and this was true for the surrounding languages ranging into Ethiopia, Kenya, and South Sudan (read more about it in Without a Map and Like It's 1899).  The data collected with UAVs will arguably improve humanitarian missions.  But what else besides?  

There is something in scientific research called dual-use technology, for which certain safety protocols are developed.  A scientist may discover an amazing virus that can be harnessed to cure cancer, but it could also be released as a weapon: there are two uses.  So far, ICT applications and other digital technology are not treated as creations with this same bipolar potency.  There is little ethical debate about the long-term implications or context of use.  There is certainly no ethical training for designers or engineers.  I have been pleasantly surprised by a few technology journal editors who encourage ethically driven arguments, but I think it would be terrific if there were more voices in the field taking the idea of 'empowerment' a bit further, that is to say really delving into how this power comes about, from where, and to whom.  To put it another way, these tools will only become empowering for more people, become better tools, if designers are driven to improve them.  This is an area with a huge opportunity to develop new tools for new groups of users (staggeringly large groups of new users) that approach local governance or any number of issues from a non-Western conceptualization.  A total departure from the humanitarian crisis responder as user, and from that task, and an embrace of the indigenous user and his/her information and communication preferences, should lead to a much more successful tool.  This culturally based ICT development is certainly on the horizon.  I can't wait to see (or hear) it in action.

Friday, 25 July 2014

Dark Cloud over Academic Freedom

The US Supreme Court's ruling upholding the subpoena issued on the basis of the Mutual Legal Assistance Treaty (MLAT) between the US and the UK makes researchers vulnerable to the same reprisals and targeting as informants and spies.  The work of researchers, the interviews they collect, the analysis they provide, can be invaluable in conflicts, and this ruling changes them from a resource for peace to a tool for destruction.

What was the case that tipped the balance?  A murder in Belfast over 30 years ago allegedly committed by Gerry Adams.  (Prof. Robert White of Indiana University's sociology department gives a nice timeline of The Troubles; The News Letter, The Pride of Northern Ireland, gives a timeline of the events of the case; and Boston College gives a timeline of the legal proceedings.)  The reason a murder case in Northern Ireland touches research in the US is the MLAT: documents collected by researchers at Boston College were subpoenaed as evidence, and, similar to the conundrum courts face with journalists who know details of a crime, the court in Northern Ireland felt there was important evidence in the interviews that could be shared via this treaty.  The Belfast Project documented many hours of interviews with participants on both sides of the conflict on the strict written understanding of confidentiality until their death unless otherwise granted.  This kind of precaution was, and still is, felt to be necessary to protect interviewees' lives and those of their families.  In fact, this case is about the kidnapping and murder of a woman by the IRA.

Now there is ample meat on this bone of contention for legal and social science scholars.  First, should we consider the researchers at Boston College researchers, or were they journalists, or even IRA-affiliated persons (which gave them the trust needed to interview those communities) with no academic credentials?  Does their categorization even matter when the real issue is the breach of confidentiality of their sources?  The breach of trust, a foundation of interview-based research.

Another issue, not emphasized during any of the court hearings, was that after 1972, anyone arrested and imprisoned in Northern Ireland was considered a participant in the conflict rather than a criminal.  There was a conceptual and legal change in standing for crimes committed thereafter as being part of a larger battle.  As I understand it (from speaking to experts in this area), if this murder charge had been brought in 1972 and Adams had been arrested then, it would not have been a murder charge but rather a political one (called Special Category Status).  And this is a key distinction, both during and after conflict, for rebuilding.  To take a different example, like Egypt after the revolution in the streets in 2011: would it be helpful to go back and prosecute every person they could find for breaking a window for vandalism, and every person for assault and battery for protest-related violence?  (Certainly some post-conflict resolutions chose to pursue justice for key leaders, such as through the ICC, but this is not always the case.)  At some point, most post-conflict societies decide to draw a line of forgiveness (such as truth and reconciliation) in order to move forward.  In no way is the forgive-and-move-on method easy; it is simply that there is something unusual about a murder case in these circumstances.

The researchers ceded their interview data to the Boston College Library, where anonymized data could be used by other researchers.  The data was held by a third party, not unlike how we use cloud data storage or email or other digital storage resources to facilitate data collection and security.  Ultimately, it became the university's decision to comply with the subpoena, not the researchers', because they had given up the data.  How we store our data, who controls it, and who has access to it are ever more important with the implications of this ruling.

As argued in the Massachusetts ACLU's amicus brief, described here by their executive director Carol Rose, “It is alarming that the trial court opinion suggests that the Constitution surrenders US citizens to foreign powers with fewer safeguards than are afforded to citizens subpoenaed by domestic law enforcement agencies.  If the government has its way, it would straightjacket judicial review of investigations and prosecutions by any foreign country party to this treaty, including Russia and China.”

The examples given in the amicus brief illustrate how information sharing (or not sharing) was a factor in recent legal actions in countries subject to the MLAT:
The prosecution of Nobel Prize winner Liu Xiaobo by the Chinese government for, “inciting subversion of state power.”
The recent arrest and prosecutions of non-governmental organizations, including civil rights groups, by the Egyptian government.
The sex discrimination case recently dismissed by a Russian judge who stated that, “If we had no sexual harassment we would have no children.”

This begs the question: why did the US Supreme Court grant this subpoena request now?  The support and 'special relationship' between the US and the UK has developed a unique flavor as a result of the war on terror.  A kind of complicity.  Despite pressure from senators and then Secretary of State Clinton, who warned it was a politically destabilizing move, this ruling opens the door wider for governments to pressure researchers for data.  For me and my colleagues, I can only imagine the consequences.  We are the ones hiking into the hills to ask former child soldiers about their experience, to ask suspected Taliban about their motivations, to ask corrupt drug enforcement police about their allegiances.  What could possibly go wrong for us or for the people we interview if we are no longer seen as purely academic researchers?

And what of the critics who say that social science provides no concrete results towards solving war and conflict?  Just because the effects of research informing policy-making are too complex to throw up on a PowerPoint slide does not mean they do not exist.  The knowledge gained by investigating the nature of conflict, its intricacies and ramifications, its participants and their motivations, certainly leads to better planning for preventing conflict and better policy-making when embroiled in the unstoppable ones.  What is the alternative?  Not understanding the nature of the thing and making guesses in the dark about policies for troops and sanctions and alliances?

Finally, the amicus brief written by a group of concerned social scientists does a wonderful job of outlining several key reasons why this ruling was egregious; it should be added as a point of review for the ethics panels of all researchers, so they understand how their data will be protected at their institutions.  In fact, if you've never read a legal brief (or tried it and hated it), this is the one for you.  It tells a story, makes a compelling argument, and stays well clear of jargon and things like, 'pursuant to code 3.1.c.-3. blah blah.'  Enjoy.




 




Friday, 25 April 2014

The Blind Spot for Big Data

The New York Times has been doing a series of pieces on the uses and limitations of Big Data.   While I do not specifically focus on big data, I look at some of the ways we collect it; therefore, I am interested in the downstream implications once it's aggregated.  How could small distortions at the scale I study become much larger? 

Since I look at conflict, the piece by Somini Sengupta, 'Spreadsheets and Global Mayhem,' certainly caught my eye.  The title of the opinion piece, about all the ways we are trying to mine data for conflict prevention, matches the term 'spreadsheets,' a feeble and not very advanced technology for organizing stuff, against the description 'global mayhem' (for me it evokes Microsoft Excel battling the Palestinian-Israeli conflict).  The title conveys the incongruence of strategies centered on big data.   Collecting information, aggregating it, that isn't enough.  The sheer weight of it, the potential, feels powerful.  Surely, answers must be in there somewhere?  But finding patterns, asking the right questions, creating really good models with complex information such as communications data (much of it translated)... that's a long way off.  We don't really know what to do with what we have, and we don't really know what the answers mean from the models we build.  That's where I think we are.  Most marketing firms vehemently disagree (sentiment analysis).  And certainly the builders of the conflict prediction machines Sengupta references, such as the GDELT Project and the University of Sydney's Atrocity Forecasting, believe fortune telling is within our digital grasp.

Another piece, by Gary Marcus and Ernest Davis, 'Eight (No, Nine!) Problems With Big Data,' addresses some of these issues, including translation.  They remind the reader of how often the data collected has been 'washed' or 'homogenized' by translation tools such as the ubiquitous Google Translate.  The original data may appear several times over in new forms because of such tools.  And there is a growing industry of writing about the flaws of big data.  The debate has made many who work within the field weary or intensely frustrated because it is fueled largely by popular misunderstandings of a very complex undertaking.

From my perspective, there remains a giant blind spot, what I call the invisible variable of culture. Most acutely, this involves the languages now coming online, the languages spoken in regions experiencing a tech boom.  Individuals in these areas must either participate online and with mobile communication technology in a European language or muddle through a transliteration of their own local language, which will not be part of this big data mining.  My research looks at the distortions in the narratives they produce in both instances.  The distortion over computer-mediated communication, such as SMS or smartphone apps which compartmentalize narrative, is a problem of how we organize what we want to say before we say it.  This pre-language process varies by culture and structures how we connect information such as sensory perception.  At the moment, our technology primarily reflects one culture's notion of how to connect information, how to organize it conceptually.  This has implications both in how information technology collects data and in how questions about that data are posed and understood.

What if other cultures have a fantastically different concept of organizing information?  How do you know the data you've collected means what you think it means?

Take a simple math example: your math is base 10, but another group might use base 12 or base 2.  When you see their numbers and analyze them with your base 10, they make sense to you, but they don't mean what they meant originally.
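
To make the analogy concrete, here is a minimal, purely illustrative sketch (not drawn from any of the projects discussed): the same string of digits yields a different quantity depending on the base the reader assumes.

```python
# The same digits "read" differently depending on the conventions of the group
# that wrote them; analyzing them under your own convention silently changes
# their meaning.
def reinterpret(digits: str, assumed_base: int, actual_base: int) -> tuple[int, int]:
    """Return (what you think it means, what it actually meant)."""
    return int(digits, assumed_base), int(digits, actual_base)

for digits in ["30", "101", "144"]:
    assumed, actual = reinterpret(digits, assumed_base=10, actual_base=12)
    print(f"'{digits}' looks like {assumed} in base 10, but meant {actual} in base 12")
```

The numbers still "make sense" under the wrong base, which is exactly why the error is invisible: nothing breaks, the analysis simply answers a different question than the one the original speakers were posing.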

We haven't cracked the code yet of how to incorporate a variable like culture into software applications.  It's more than translation.  It's not as easy as word replacement.  It's deeper than that.  It's context.  It's at the level of concepts and categories.  The way we see things before we use language.  That's not to say we can't unravel these things with algorithms... but those are often based on (even unconsciously) our understanding of communication.  And there is massively insufficient research on most languages out there.   If there are around 6800 languages, Evans and Levinson (2009) figure that:
Less than 10% of these languages have decent descriptions (full grammars and dictionaries). Consequently, nearly all generalizations about what is possible in human languages are based on a maximal 500 language sample (in practice, usually much smaller – Greenberg’s famous universals of language were based on 30), and almost every new language description still guarantees substantial surprises.
And the languages within the tech boom regions such as Africa and Southeast Asia are certainly part of the knowledge void.  We aren't prepared to collect this data yet.  The data we do collect are basically shoehorned into a format meant for English and for Western concepts (like our notions of cause and effect, or even time).  Data from these language groups, including usage patterns such as the flu or pregnancy predictor algorithms we've read about, won't be any good without further cultural adaptation.  And when it comes to crunching the data, we have a lot to learn about asking context-specific questions and understanding the data from a non-Western framework. (My own research results have shown me it's the difference between thinking you've identified a victim or a villain.)

While not widely understood yet, these cultural differences in the Big Data story are a dazzling challenge to consider.


The Global Database of Events, Language, and Tone (GDELT) is an initiative to construct a catalog of human societal-scale behavior and beliefs across all countries of the world, connecting every person, organization, location, count, theme, news source, and event across the planet into a single massive network that captures what's happening around the world, what its context is and who's involved, and how the world is feeling about it, every single day. (gdeltproject.org)

 



Friday, 21 March 2014

Another war with privacy

In a recent piece in The New York Times titled 'Talking Out Loud About War, and Coming Home,' Karen Zraick described a troubling feature of contemporary American culture: the failure to discuss the experience of war.  I characterize it as troubling (and although perhaps not uniquely American, I argue it is culturally rooted) because the debate about being in a war lacks the voices of veterans.

What intrigues me most about this phenomenon, this silence, is how it can happen at all in the culture of new media.  While we capture, catalog, share, comment, repost, and remix every second of our existence, we still avoid this topic.  What is it in our cultural DNA that compels us so strongly to keep the experience of war private?

According to Zraick, returning veterans today are seen, but they are not heard, and they are not even asked.  They remain isolated, which is detrimental to them and to the nation. She explains the reasons for the reluctance to engage with veterans:
Civilians said they were reluctant to bring up what the veterans had experienced in combat, for fear of reopening old wounds. One mentioned guilt at not having served, another of growing up with a distant father who had been scarred by war. Some spoke of wanting to reconcile opposition to war with support for those who had fought, or anger about what the veterans had gone through. 
Without including the realities of veterans' experiences, the national discussion surrounding whether to be in a war relies on abstractions, rhetoric about broad ideas of freedom and democracy; it does not include the details of what it means to the human being doing the fighting.  Should this be part of the debate?  Why do we turn away from this?


www.1914.org

In her compilation, How We Are Changed By War: A Study of Letters and Diaries from Colonial Conflicts to Operation Iraqi Freedom (2011) (review), Diana Gill approaches this question by examining the words of soldiers, support staff, and families over almost a century.  She finds there has often been a willing denial about the intensity of the situation we find ourselves in.  She offers several reasons for this avoidance, all of them culturally rooted: we don't complain; we put country first; we don't want to worry family; we stay positive... and many more.  It is a rich trove for sociologists and psychologists to trace all the factors.  What stands out for me is the contrast to other approaches for addressing painful and violent events, a contrast in terms of communication style and cultural attitude. 

Take the District Six Museum in South Africa, for example.  There the approach is to bring everything out into the light. 
Gone
Buried
Covered by the dust of defeat –
Or so the conquerors believed
But there is nothing that can
Be hidden from the mind
Nothing that memory cannot
Reach or touch or call back.         Don Mattera, 1987 (District Six Museum website)
Not only can you walk through the houses, you can write your story on the wall, you can add your living memory to everyone else's.  The idea (paralleled in other museums in post-conflict zones in Africa) is to provide a space (a sensory environment) to heal and to use the memory of the pain to prevent similar conflicts.  It is the opposite approach in many ways to the American style.  However, I hesitate to generalize one style as open and another as closed.  I think each style is concerned with trust because of the sensitive nature of the topic and is therefore still aimed at an internal or intragroup audience. 

From the observations of Susan Sontag in her 2003 essay, Regarding the Pain of Others, the visualization of pain, the horrors of war and suffering, have played a role historically and certainly have a psychological effect.  Among the cultural turning points for the US were the televised war in Vietnam and the subsequent decisions not to broadcast violent content.  Watching programming today, compare the American Al Jazeera and the Arabic version of the same news story; the coverage of war and violence is strikingly different.  Is the American version sanitized because no Americans are shown to be harmed?  (As in Sontag's title, only the pain of others is shown.)  Or is it more humane?  Is it part of the phenomenon of denial of the details of the war experience?  I have more questions than conclusions, but a taboo is a rare bird these days.  It deserves to be investigated more thoroughly.




Wednesday, 12 February 2014

For your eyes only



While I write constantly about adapting technology to other cultures, to make software which is more useful as a tool for information gathering and analysis, especially when the information comes in the form of communication or narratives, I may not write enough about how these cultures have already adapted.  One of my colleagues who researches security in the Great Lakes Region (DRC/Rwanda) reminded me about the frighteningly sophisticated system which grazes on Western-made media sources but does not itself rely on them to organize, store, or analyze what it finds.
I had experienced similar things in North Africa when giving a police report: an efficiency of information collection and tracking untethered to computers or, for that matter, text-based record keeping of any kind.  The speed was jaw-dropping to witness.  And a bit scary.  The role of computers, mobile phones, and online platforms was purely to connect with the outside, the West, as an audience.  It was a strategic and sophisticated media manipulation.  I have written a bit about this as a political tactic during the revolutions in Tunisia and Egypt in 2011, a time when group leaders were rapidly improving their skills for targeting audiences and crafting language-specific messages.  I was also part of projects aimed at using new media to target domestic audiences.  By comparison, these were lackluster in the amount of political energy they generated.  Photo-sharing and film were much more popular than text-based mediums.  I didn't find the 'revolutionary' media strategies to be very rousing for domestic audiences because they didn't work within preferred communication modes, i.e., orality.  In sub-Saharan Africa, where more languages are tonal, I predict this phenomenon would be even more pronounced (hence my research).  And it is perhaps one reason there has not been an African Spring similar to the Arab Spring (worth considering among the myriad of reasons...).

In Uganda, where I recently did fieldwork, the profusion of mobile phones is hard to ignore.  If everyone has one and is eager to use it in some fashion, why not get the most out of it rather than remain a data donor?  The responses from participants in my experiment reflected an attitude toward technology as though they engaged with it as partial selves.  As bilinguals, they are able to choose their mode of communication, and for them ICT was not connected to their Acholi-selves. 
“The Europeans are the ones who brought all this. It was not ours,” said a skilled laborer, male age 40+
Using a novel approach from the field of cognitive linguistics, I was able to highlight deficiencies of ICT software from a perspective that could change this sentiment.  Indigenous software could be developed which felt like it spoke to and worked with their Acholi side rather than forcing them to switch over to their English side.  The advantages of this type of adaptation in design have implications for economic development, information security, and political participation.  Besides retaining non-technology-based channels for information, which are already efficient, it is imperative that cultural groups address the inherent power imbalance created by perpetually importing foreign methods for capturing information, by developing their own.  Controlling the information (by controlling the software code) could mean changing the power dynamics behind how that information is leveraged in policy-making. 

Tuesday, 14 January 2014

Negative Measure


Built to solve problems.  List deficiencies.  Map crises.  The ICTs for conflict management aggregate the negative and forget to leave a space for the positive.

In the survey I devised to collect descriptive information about a video scene my experiment participants watched, I followed the models of several other ICTs for conflict management and collected information about the individuals perpetrating the actions, the location, the level of damage inflicted, and the level of insecurity participants observed.  However, when I compared the structured answers with oral descriptions, participants often spent time detailing the involvement of bystanders.  Did people offer help?  There seemed to be an expectation of community intervention to calm a situation.  Also, participants were measured in their consideration of the guilt or innocence of the perpetrator.  They offered more than one explanation for the scenario they watched so as to place the motivations, culpability, or even the justification for involvement in mild violence into doubt.

The categories of perpetrator and victim, villain and target were not delineated in the same way as I expected.  The core conceptualization of the event as a 'problem' may in fact be the problem.  This is the initial premise for taking the report.  We want to learn more about it (the problem), about its components, its actors, locations, moving parts, so we can design a solution and prevent its re-occurrence.  What if the local population doesn't perceive a problem?  Or what if they understand the maladjusted components in a manner that is undetectable, or conceptually invisible, with the current ICT approach?  I think it's a matter of the wrong model, not the wrong impulse to improve.

In my experiment, I asked individuals to identify 'the attacker,' the person hitting another man about the head and chasing him through the scene.  Most took this to mean 'who is causing the problem?'  And they identified the man I would have called 'the victim.'  Moreover, several individuals told me they came to this conclusion because this problem-causer/victim was not fighting back but was being chased and hit while offering no defense.  This meant he was guilty.  For me, this meant he was in need of help.  This model for recognizing justice, cause-effect, and culpability was foreign to me.  It would be worth doing more experiments around just this concept (perhaps conceptual transfer experiments) and sampling more than just me.
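
To make that concrete, here is a purely illustrative sketch of how a report schema might leave room for what participants actually volunteered. This is not the survey instrument I used; the field names and example values are hypothetical.

```python
# Illustrative contrast: a conventional incident-report schema versus one that
# leaves room for community intervention and multiple, hedged explanations,
# the things participants volunteered in their oral descriptions.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ConventionalReport:
    perpetrator: str
    victim: str
    location: str
    damage_level: int          # e.g. 0-5 scale
    insecurity_level: int      # e.g. 0-5 scale

@dataclass
class ContextualReport(ConventionalReport):
    bystanders_intervened: Optional[bool] = None   # did the community step in?
    community_response: str = ""                   # free-form oral description
    alternative_explanations: list[str] = field(default_factory=list)  # room for doubt
    culpability_certainty: Optional[float] = None  # hedged, not binary guilt

# Hypothetical usage:
report = ContextualReport(
    perpetrator="unknown", victim="unknown", location="market",
    damage_level=1, insecurity_level=2,
    bystanders_intervened=True,
    alternative_explanations=["debt dispute", "mistaken identity"],
)
```

Even this small extension only relabels Western categories; the harder work is deciding whether 'perpetrator' and 'victim' are the right conceptual slots at all, which is the point of the paragraphs that follow.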

Is it just a matter of thinking up better names for categories?  Better questions to ask on surveys?  It's more than an issue of gathering information quantitatively or qualitatively.

Take for example the issue of the egg, the spear, and the egg-water.  Context is key for meaning.  This is true for any language.  In Luo, tong means spear or egg depending on tone, depending on context.  (The phrase tong pii means 'clean water' in Ethiopian Anuak, but 'egg water' in Kenyan Luo, two closely related Nilotic languages.  Although with negotiation, the phrase could mean water for eggs in either language.  So that's funny.)  The thing about tone and context is that they rely on a speaker-listener interaction, a volley, an exchange, a non-solo communication act.  Not like writing.
"Hope" is the thing with feathers—
That perches in the soul—
And sings the tune without the words—
And never stops—at all—.... E. Dickinson, (not a Luo).  
Much of our communication tech has evolved to capture, facilitate, speed and streamline writing, an individualistic form of expression.  It simply can't convey a type of communication with an essential, reverberative quality in which semantic content is as much (or more?) tied to the speaker-listener relationship as it is to anything that can be captured with text. 

Yes, these tools are meant to increase our ability to go into the field and gather information from 1000 individuals instead of 30.  This is great for researchers and polling and participatory governance and all sorts of reasons, but only if the tool is a good tool, that is, it assists us in doing a task we are already doing, makes it simpler, faster, easier in some way.... but if instead it brings us speedily to the wrong results, then what good is it?

The problem comes from the fact that the tools being used now were built to capture Western narratives (or logical constructs, conceptualizations of events) and to communicate among NGO staff after disasters.  These same tools have been only slightly modified and then redeployed for conflict use, such as post-reconstruction governance surveys, violence reports for election monitoring, etc., all the while not recognizing that the new users have new needs.  New conceptualizations of the events they are describing (and new ways of linking them) may be the key to empowering locally driven solutions and disengaging externally mandated ones.