Coding Text – Part Two

In Part One of the series “Coding Text the QDAP Way,” I wrote about the problem of idiosyncratic annotation and the lack of diverse, interesting, and re-usable annotated datasets. Providing data for replication (when possible) is a requisite step in a scientific approach. An important aspect of this effort is following up on the agreements made, starting in the 1990s, among editors of major research journals to require replication datasets and the sharing of the specifics of data coding and computer syntax.

This work is now well advanced on the quantitative data-sharing front. Developing such an agreement for qualitative data research, and implementing it consistently across a wide-reaching community of researchers, is no simple task. Sharing raw and coded political corpora will lead to better manual and automated text mining and analysis in political science. This is an epoch of highly accessible digital text collections. Blog posts, wikis, YouTube comments, and the like, as well as the full range of digitized traditional media, are vast sources of potentially important political data in text format. A new approach to coding and sharing annotations might help to dispel the prevailing perception of a zero-sum game in research, opening up many new basic and applied research opportunities for political scientists.

The manual annotation of text is a nexus for collaboration between political scientists and computer scientists, as well as researchers in allied social sciences and in fields such as journalism, literary analysis, library science, and education, where the rigorous interrogation of text is a well-established tradition. In particular, researchers in computer science possess the tools, repositories, and methods necessary for managing studies of millions of documents over time. Just as with search engines, we can expect these emergent human language tools to become, in short order, irreplaceable elements of the researcher’s electronic desktop. The next generation of language tools will be built with the “ground truth” support of high-quality coding and evaluation studies.

Many researchers from a variety of disciplines stand to benefit from reliably recorded, publicly available, transparent, large-scale annotations. These collections can be produced by properly equipped and trained coders, as well as by active-learning and machine-learning algorithms developed by computer scientists. Yet very few researchers in any discipline can say with confidence that they know where to acquire or how to produce reusable annotated corpora with widespread, multi-disciplinary appeal. Even fewer could imagine freely sharing those hard-earned text annotations with other members of a research community or publishing them on the Web to attract more diverse and sustained scholarly attention.

There is some evidence that making data available increases citations. Although a strong tradition is emerging among leading social science journals whereby scholars post their statistical data and models in repositories for those who would replicate their experiments and calculations, the same cannot currently be said of text annotations, other forms of qualitative work, or even raw text datasets. As a result, there is a dearth of well-coded contemporary and historical text datasets. This is only partly due to the fact that the manual annotation of text can be conceptually very difficult, if not a bit controversial, as well as expensive and too often unsuccessful.

It is often dreary work, a characteristic that further encourages the use of unsupervised machine annotation when possible. More fundamentally, however, only limited guidance exists in the scholarly literature about how best to recruit, train, equip, and supervise coders to get them to produce useful annotations that serve multiple research agendas in divergent disciplines. As Eduard Hovy (Computer Science, University of Southern California-Information Sciences Institute) regularly points out, researchers need a formal science of annotation focused on cross-disciplinary text mining activities. Carefully and transparently coded corpora are a viable bridge to collaboration with computer science and computational linguistics and can open up new possibilities for large-scale text analysis.

In the third and final part of this series, we look at the quest for the elusive “gold standard” in human annotation.

About Stuart Shulman

Stuart Shulman is a political science professor, software inventor, entrepreneur, and garlic-growing enthusiast who coaches U13 boys’ club soccer and in the Olympic Development Program, holding a national D-license. He is Founder & CEO of Texifter, LLC, Director of QDAP-UMass, and Editor Emeritus of the Journal of Information Technology & Politics. Stu is the proud owner of a Bernese/Shepherd named “Colbert,” who is much better known as ’Bert. You can follow his exploits @stuartwshulman.