At Dedoose, we take the business of developing code systems very seriously, and we aren’t the only ones. A quick Google search for ‘coding qualitative data’ yielded about 1.1M hits and ‘qualitative code system development’ landed me about 12M hits. Clearly people are thinking and communicating a great deal on this most central aspect of sound qualitative and mixed method data analysis.
Last year we did a very well received topical webinar titled, ‘Toward Building a Strong and Useful Coding System.’ The topic comes up so regularly in our webinars, lectures, and conversations that we thought we’d revisit it by summarizing that presentation here and inviting anyone interested to access the webinar recording itself (click here to see the recording).
With full disclosure here, at Dedoose and in the UCLA Center for Culture and Health (where I wear my other hat as an Associate Research Psychologist and co-direct a methods consulting Lab with Dr. Tom Weisner), we believe strongly in the systematic and rigorous application of research methods—whatever methods you employ. In general, we use methods primarily as accepted ways to take us closer to the phenomena under study and to generate findings that we can use to communicate to the consumers of our work, our research audience. When methods are used well, we can have more confidence in our findings and, we hope, our audience will have more confidence in what we deliver. Your code system becomes the conceptual framework you will use to organize your qualitative data, understand them, and then communicate the results of your work…pretty central role, huh? So, good science in qualitative and mixed methods work depends on a solid code system.
Ok, so what do we think a solid code system looks like?
First, a clarification of some terms: ‘Codes’ or ‘Tags?’ In the Dedoose app, we use these terms interchangeably because they function identically in a Dedoose project. Essentially, they are the labels we attach to meaningful sections of our qualitative media so we can retrieve that content at some later time. In our thinking, ‘tags’ are often developed and used somewhat indiscriminately as people initially delve into their data and find things of interest—and when approached this way, people often end up with a lot of tags, each used relatively infrequently. Sets of tags can later be developed into proper codes. However, given the way we sometimes see users doing this work, this can be difficult unless each tag is very transparent about the content it was associated with. ‘Codes,’ we would argue, must be developed more systematically into labels we can use to tag content meaningful to our research questions, that can be applied consistently by different members of our research teams, and that help categorize data across our research population—that is, they help tell a broader story.
So, somewhat briefly, here’s how we develop our code systems.
- Candidate codes are generally identified in one of two ways:
- ‘a priori’ or ‘etic’—predetermined based on theory, an interview protocol, or some other manner not directly connected to the data and
- ‘in vivo’ or ‘emic’—more emergent in nature as they are inspired by what is found directly in the data and help us see and understand things we did not anticipate.
- Code viability is then developed through an iterative process of hypothesis generation, hypothesis testing, modification, reconsidering, re-testing… until we’ve got something that we can consistently apply to the real data in our project and that, we believe, captures something meaningful across our research population that speaks to our research questions. For example, say we come up with a code called ‘religiosity’ and define its use to ‘tag the stuff people tell us about religious rituals they engage in at home each day.’ But what if some people say they only do these activities 5 days a week? What if others say they sometimes do these activities at home and sometimes they do them in a temple? What if we see this in the reports of the first three of our participants, but none of the other 50 people talk at all about religious rituals engaged in at home? This is a simple example of the kinds of questions we should be asking as we develop the rules for how our codes are defined and the criteria we will use to decide whether they are to be applied. This rule book, or code book, should evolve as themes develop, and we then keep testing and modifying the rules as we explore whether the codes will be useful for all the kinds of data we collect from our participants. It is easy to come up with nice dictionary definitions for the codes we think we’ll be using. But, dang it!, people use their own rich and varied language in how they communicate with us during our research, and it can be tricky to map our concepts to the data we actually collect, which are often so wonderfully and qualitatively messy.
- Do this deeply enough, however, and you end up with a code system that stabilizes as you develop enough codes to capture all the important content you find in your qualitative data in order to address your questions. It looks a little like this:
[Figure: bell curve of excerpt fit for code ‘X’]
Take code ‘X.’ The green excerpts are easy to code because they are so central to your concept; the yellow excerpts are where the questions start to arise; and the red ones are more clearly out. So it’s the fuzzy yellow zone between green and red where you need to establish some clear rules based on what you find in the real data—and keeping some examples in your codebook can be very helpful too.
- Then, to demonstrate reliability to yourselves—and so you can make strong arguments to your audience—we believe in testing for inter-rater reliability (check out the Dedoose Training Center for an assist here). Blind coding of the same content by two or more coders, with everyone doing essentially the same job on all your important codes, suggests you’ve got a rock-solid code system. There are a variety of ways to do this, but one is to calculate Cohen’s kappa coefficient, a widely recognized and respected index of inter-rater reliability (and the primary result from a test in the Dedoose Training Center). And even if you are working alone, it is worth finding a friend, colleague, or your grandmother to see if you can explain things well enough for them to do a pretty good job on some sample data, so you can feel confident you are clearly articulating what you are identifying.
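For the curious, the kappa statistic itself is simple to compute. Here is a minimal sketch—not Dedoose’s implementation—using the standard definition (observed agreement corrected for chance agreement); the coder decisions in the example are entirely hypothetical:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    n = len(rater_a)
    # Observed agreement: fraction of excerpts both coders labeled the same way.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: computed from each coder's marginal label frequencies.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    labels = set(rater_a) | set(rater_b)
    p_e = sum((counts_a[label] / n) * (counts_b[label] / n) for label in labels)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical data: did each of two coders apply code 'X' to ten excerpts?
coder_1 = [1, 1, 0, 1, 0, 0, 1, 1, 0, 1]
coder_2 = [1, 1, 0, 1, 0, 1, 1, 1, 0, 0]
print(round(cohens_kappa(coder_1, coder_2), 2))  # → 0.58
```

Values near 1 indicate near-perfect agreement, and by a common rule of thumb values above roughly 0.8 are considered strong.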
- Good Parent Codes—The Skeleton. These codes:
- Represent broader buckets of meaning in your qualitative data
- Are typically easier to apply reliably
- Are useful for responses from a wide majority of your research population
- Help define the major concepts related to your research questions
- Child Codes—The Meat on the Bones. These codes:
- Are more nuanced and, thus, more difficult to apply reliably
- Represent the richer details in the data
- Are often where the story is told, whereas the parent codes organize the story
- Grandchild, Great Grandchild,… Codes/Tags—The Dressing. These codes/tags:
- Are much more idiosyncratic and MUCH harder to apply reliably
- Are often associated with interesting but more anecdotal stories
- Are used very infrequently
- And to bring it full circle, can be useful in tagging content for later retrieval, BUT may not ultimately be considered proper codes as we’ve defined them here.
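The skeleton–meat–dressing hierarchy above can be pictured as a simple tree, where each level of nesting is a generation of codes. As a rough sketch only—the code names here are hypothetical, and this is just an illustration, not how Dedoose stores codes:

```python
# Hypothetical code tree: parents are broad buckets, children add detail,
# grandchildren are idiosyncratic (and may be tags rather than proper codes).
code_tree = {
    "Religiosity": {                              # parent: the skeleton
        "Daily Home Rituals": {                   # child: meat on the bones
            "Rituals Shared With Neighbors": {},  # grandchild: the dressing
        },
        "Temple Attendance": {},
    },
}

def depth_of(tree, target, depth=1):
    """Return the generation (1 = parent, 2 = child, ...) of a code, or None."""
    for name, children in tree.items():
        if name == target:
            return depth
        found = depth_of(children, target, depth + 1)
        if found is not None:
            return found
    return None

print(depth_of(code_tree, "Rituals Shared With Neighbors"))  # → 3
```

The deeper a code sits in the tree, the harder it tends to be to apply reliably—which is exactly the pattern described above.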
Using the same bell-curve illustration from above, you can think about the variation in excerpts as your codes get increasingly nuanced. The more nuanced the code, the more challenging it is to define the code application rules and communicate them to others. That is, there are fewer fuzzy yellow (is this in or is it out?) excerpts at the parent level, more trouble with those boundaries at the child level, and even more at the grandchild level:
[Figures: bell curves of excerpt fit for Parent Code ‘X’, Child Code ‘Y’, and Grandchild Code ‘Z’]
So, in short, You and Your Team are the Method:
- Identify the concepts you need that can really be found in the data
- Build out contextualized code application rules
- Practice, practice, practice
- Document, document, document
- Establish strong inter-rater reliability
Nail all that down and Go TEAM GO! Divide and Conquer!