Dedoose is an easy-to-use, collaborative, web-based application that facilitates all types of research data management and analysis.
Here's what you need to know about how to use it.
The Dedoose Training Center is a unique feature designed to assist research teams in building and maintaining inter-rater reliability for both code application (the application of codes to excerpts) and code weighting/rating (the application of the specified weighting/rating scales associated with code applications). Training sessions (‘tests’) are based on the coding and rating of an ‘expert’ coder/rater. Creating a training session is as simple as selecting the codes to be included in the ‘test,’ selecting the previously coded/rated excerpts that will make up the test, and then specifying a name and description for the test. ‘Trainees’ access the session and are prompted to apply codes or weights to the set of excerpts making up the session. During the exercise, the trainee is blind to the work that was done by the ‘expert.’ Upon session completion, the results present Cohen’s Kappa coefficient for code application and Pearson’s correlation coefficient for code weighting/rating, both overall and for each individual code, as indexes of inter-rater reliability, along with details of where the ‘trainee’ and ‘expert’ agreed or disagreed.
Instructions on creating tests greet you when first accessing the Training Center Workspace. To create a new test:
Tip: It is often most useful to focus the analysis of inter-rater reliability on the codes that are most important to the research questions, that are used relatively frequently, and that are associated with a well-documented set of application criteria (the rules for when the code is, or is not, appropriately applied to a particular excerpt). In the Dedoose Training Center, Cohen’s Kappa is calculated on an ‘event’ basis: each excerpt is an event, and each code is either applied or not applied to that excerpt. Thus, if there are many events for which a particular code is not applied (and not appropriate), there will be frequent agreements between the ‘expert’ and the ‘trainee’ in not applying the code, which can disproportionately (and misleadingly) inflate the resulting Cohen’s Kappa coefficient.
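To make the ‘event’ logic concrete, here is a minimal sketch (in Python, with hypothetical data; it is not Dedoose’s internal code) of how event-based Cohen’s Kappa is computed for a single code, and of how padding a test with excerpts to which neither rater applies the code can raise the coefficient without any real change in how the raters handle the code when it is at issue.

```python
# Illustrative sketch only: hypothetical data, not Dedoose's internal code.
# Each excerpt is an 'event'; 1 = code applied, 0 = code not applied.

def cohens_kappa(expert, trainee):
    """Event-based Cohen's Kappa for one code across a set of excerpts."""
    n = len(expert)
    observed = sum(e == t for e, t in zip(expert, trainee)) / n
    p_exp = sum(expert) / n       # proportion of events where the expert applied the code
    p_trn = sum(trainee) / n      # proportion of events where the trainee applied the code
    expected = p_exp * p_trn + (1 - p_exp) * (1 - p_trn)   # chance agreement
    return (observed - expected) / (1 - expected)

# 20 excerpts where the code is genuinely at issue; the raters agree on 14 of them.
expert  = [1] * 10 + [0] * 10
trainee = [1] * 7 + [0] * 3 + [1] * 3 + [0] * 7
print(round(cohens_kappa(expert, trainee), 2))                         # 0.4

# Add 80 excerpts to which neither rater applies the code: agreement
# 'in not applying' pushes the coefficient up even though agreement on
# the contested excerpts is unchanged.
print(round(cohens_kappa(expert + [0] * 80, trainee + [0] * 80), 2))   # 0.67
```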
Once a test is saved to the Training Center test library, a trainee can take it by clicking the test in the list and then clicking the ‘Take this test’ button in the lower right corner of the pop-up.
Taking a Code Application Training Test
In a code application test, the trainee is presented with each excerpt and the codes designated for the test and is expected to apply the appropriate code(s) to each excerpt. They can move back and forth through the test using the ‘Back’ and ‘Next’ buttons until they are finished. Here’s a screenshot of a test excerpt before any codes have been applied:
Upon completion, the results of the test are presented, including a pooled Cohen’s Kappa coefficient and a Cohen’s Kappa for each code included in the test, along with documentation and citations for interpreting and reporting these inter-rater reliability results in manuscripts and reports.
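The per-code values follow the event-based calculation illustrated earlier. For the pooled value, one common approach is to average observed and expected agreement across all codes in the test before applying the Kappa formula; the sketch below (hypothetical data, and not necessarily the exact formula Dedoose uses, which is described in the documentation presented with the results) shows that approach.

```python
# Hypothetical sketch of one common way to pool Kappa across codes:
# average observed and expected agreement over the codes, then apply the
# Kappa formula once. Not necessarily Dedoose's exact computation.

def pooled_kappa(code_decisions):
    """code_decisions maps code name -> (expert, trainee) lists of 0/1 per excerpt."""
    observed, expected = [], []
    for expert, trainee in code_decisions.values():
        n = len(expert)
        p_o = sum(e == t for e, t in zip(expert, trainee)) / n
        p_exp, p_trn = sum(expert) / n, sum(trainee) / n
        observed.append(p_o)
        expected.append(p_exp * p_trn + (1 - p_exp) * (1 - p_trn))
    mean_o = sum(observed) / len(observed)
    mean_e = sum(expected) / len(expected)
    return (mean_o - mean_e) / (1 - mean_e)

# Hypothetical code names and decisions across six excerpts.
example = {
    "Stress": ([1, 1, 0, 0, 1, 0], [1, 0, 0, 0, 1, 0]),
    "Coping": ([0, 1, 1, 0, 0, 1], [0, 1, 1, 0, 1, 1]),
}
print(round(pooled_kappa(example), 2))   # 0.67
```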
Further, by clicking the ‘View Code Applications’ button, teams can review, excerpt by excerpt, how the ‘expert’s’ and ‘trainee’s’ code applications corresponded. This information is invaluable in developing and documenting code application criteria and in building and maintaining desirable levels of inter-rater reliability across team members.
Taking a Code Weighting/Rating Test
As described above, setting up a code weighting/rating test is identical to setting up a code application test, except that when selecting the codes to include, only those with code weighting activated will be available. Again, once a test is saved to the Training Center test library, a trainee can take it by clicking the test in the list and then clicking the ‘Take this test’ button in the lower right corner of the pop-up.
In a code weighting/rating test, the trainee is presented with each excerpt and all codes that were applied by the ‘expert’ and is expected to adjust the weight on each code to the appropriate level for that excerpt. They can move back and forth through the test using the ‘Back’ and ‘Next’ buttons until they are finished.
Here’s a screenshot of a test excerpt before any weights have been set; the weights shown are the default values:
Upon completion, the results of the test are presented, including an overall Pearson’s correlation coefficient and a correlation coefficient for each individual code included in the test.
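For the weighting/rating results, the index is a Pearson’s correlation between the weights the ‘expert’ and the ‘trainee’ assigned to the same code applications. Here is a minimal sketch of that calculation for one code (Python, with hypothetical weights on an assumed 1–9 scale; not Dedoose code).

```python
# Hypothetical sketch: Pearson's correlation between the expert's and
# trainee's weights on the same code applications (not Dedoose's code).
import math

def pearson_r(expert_w, trainee_w):
    """Pearson's correlation between two equal-length lists of weights."""
    n = len(expert_w)
    mean_e = sum(expert_w) / n
    mean_t = sum(trainee_w) / n
    cov = sum((e - mean_e) * (t - mean_t) for e, t in zip(expert_w, trainee_w))
    ss_e = math.sqrt(sum((e - mean_e) ** 2 for e in expert_w))
    ss_t = math.sqrt(sum((t - mean_t) ** 2 for t in trainee_w))
    return cov / (ss_e * ss_t)

# Weights applied to the same code on six excerpts (assumed 1-9 scale).
expert_weights  = [3, 7, 5, 9, 2, 6]
trainee_weights = [4, 6, 5, 8, 3, 7]
print(round(pearson_r(expert_weights, trainee_weights), 2))   # 0.95
```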
Finally, as with the code application test, clicking the ‘View Code Applications’ button allows teams to review, excerpt by excerpt, how the ‘expert’s’ and ‘trainee’s’ weightings corresponded. These results can then be discussed among the team toward developing criteria for, and establishing consistency in, the code weighting/rating process.
At times you may want to work on the same document as someone else but don’t want to be influenced by the excerpting and coding decisions of others on your team. Or perhaps you want to build your code tree collaboratively and in context before using the Training Center to more formally test for inter-rater reliability (for more information, see the section on the Training Center). Whatever the reason, here are the steps to what we call ‘coding blind.’ Basically, you are simply turning off (removing from view) the work of others before you begin your own work.
To save this (or any) filter:
To reactivate a saved filter:
To deactivate a filter:
Note: Closing Dedoose will also clear any active filters.
Beyond the Dedoose Training Center and coding blind strategy, taking advantage of our document cloning feature is one more way you can work with your team toward building inter-rater reliability.
Keep in mind that creating and tagging excerpts involves two important steps: first, creating the excerpt itself by deciding where it begins and ends, and second, applying (tagging) the appropriate codes to that excerpt.
There are times when teams want to focus only on the code application decisions. For these situations, Dedoose allows documents to be cloned with all excerpts in place. This feature was designed for teams wishing to compare coding decisions on a more ‘apples-to-apples’ basis. Here's how it works:
This approach has the advantage of allowing direct comparison of coding decisions within the context of a document's content.