We did it! The free, open source, Web-based, university-hosted, FISMA-compliant “Coding Analysis Toolkit” (CAT) recorded its one millionth coding choice.
Pretty much all the credit goes to Texifter CTO and chief CAT architect Mark Hoy, who has put in many paid (and unpaid) hours making sure CAT is reliable, usable, and scalable. Texifter Chief Security Officer Jim Lefcakis also played a key role in ensuring that the hardware and server room were maintained at the highest level of reliability and security. In honor of this milestone, I have been digging through my unpublished papers for material that explains in more detail where CAT, PCAT, DiscoverText, QDAP, and Texifter come from. This post is the first in a series about the particular approach to coding text we have come to call the “QDAP method.”
Large collections of political text are coded and used for basic and applied research in the social and computational sciences. Yet the manual annotation of the text (the coding of corpora) is often conducted in an ad hoc, inconsistent, non-replicable, invalid, and unreliable manner. Even the best intentions to enable replication can, in practice, confound the most ardent followers of the creed “Replicate, Replicate.” While mechanical, procedural, documentary, and other challenges exist for all approaches, practitioners of qualitative or text data analysis routinely report greater, even insurmountable, barriers to re-using coded data or repeating significant analyses.
There are diverse approaches to coding text. They tend to be hidden away in small niche sub-fields, where knowledge of them is limited to a small research community, a project team, or even a single person. While researchers classify text for a variety of reasons, it remains very difficult, and for many counterintuitive, to share these annotations with other researchers, or to work on them with partners from other disciplines for whom the coding may serve an alternate purpose. A change in the way researchers think about, conduct, and share coded political corpora is overdue.
Coding is expensive, challenging, and too often idiosyncratic. Training and retaining student coders, or producing algorithms capable of tens of thousands of reliable and valid observations, requires patience, funding, and a framework for measuring and reporting effort and error. Given these factors, it is not surprising that a proprietary model of data acquisition and coding still dominates the social sciences. Despite the important role of the social in social science, researchers guard “their” privately coded text, even the raw data, fearing that others will beat them to the publication finish line or challenge the validity of their inferences. This competitive approach of producing, but not sharing, annotations forecloses intriguing and highly scalable possibilities for collaborative social research enabled by the Internet.
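To make the measurement problem concrete, here is a minimal sketch (not from the original post, and not CAT’s actual implementation) of one standard way to report inter-coder agreement: Cohen’s kappa, a chance-corrected statistic comparing two coders’ labels on the same text units. The unit labels and coder data below are hypothetical.

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Chance-corrected agreement between two coders who labeled
    the same sequence of text units."""
    assert len(coder_a) == len(coder_b), "coders must label the same units"
    n = len(coder_a)
    # Observed agreement: fraction of units both coders labeled identically.
    p_observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected agreement by chance, from each coder's marginal label frequencies.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    p_expected = sum(freq_a[label] * freq_b[label] for label in freq_a) / n**2
    return (p_observed - p_expected) / (1 - p_expected)

# Hypothetical codes assigned to ten text units by two student coders.
coder_a = ["pro", "con", "con", "neutral", "pro", "pro", "con", "neutral", "pro", "con"]
coder_b = ["pro", "con", "pro", "neutral", "pro", "con", "con", "neutral", "pro", "con"]

print(f"Cohen's kappa: {cohens_kappa(coder_a, coder_b):.2f}")  # 0.69
```

Values near 1 indicate near-perfect agreement after accounting for chance; reporting a statistic like this alongside the coded data is one example of the kind of measurement framework the paragraph above calls for.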
Researchers should seek to enhance and modernize the architecture for large-scale collaborative research using advanced qualitative methods of data analysis. This will require working out, and attaining widespread acceptance of, Internet-enabled data-sharing protocols, as well as establishing free, open source platforms for coding text and for sharing and replicating results. We believe that, when used in combination, the “Dataverse Network Project” and CAT represent two important steps in advancing that effort. Large-scale annotation projects conducted in CAT can be archived in the Dataverse, making them more readily available for replication, review, or re-adjudication of the original coding.
In Part Two of the series “Coding Text the QDAP Way,” we’ll say more about the role of scholarly journals in advancing this practice of re-using datasets.