Lotus (BETA)

Lotus is a reliability coefficient that is simple to construct and easy to interpret. The coefficient is based on agreement with the most commonly coded value (MCCV) and is determined for two or more coders within each separate coding unit.
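To make the MCCV idea concrete, here is a minimal sketch in Python. It is not the official Lotus implementation; it assumes, based on the description above, that unstandardized Lotus is the share of coders matching the MCCV in each unit, averaged over all coding units.

```python
from collections import Counter

def lotus(units):
    """Sketch of unstandardized Lotus: mean share of coders who
    coded the most commonly coded value (MCCV), averaged over
    coding units. `units` is a list of lists, one inner list of
    codes per coding unit (one entry per coder)."""
    shares = []
    for codes in units:
        # Count how many coders chose the MCCV in this unit.
        mccv_count = Counter(codes).most_common(1)[0][1]
        shares.append(mccv_count / len(codes))
    return sum(shares) / len(shares)

# Three coding units, four coders each.
data = [
    ["a", "a", "a", "b"],  # MCCV "a" coded by 3 of 4 -> 0.75
    ["a", "a", "a", "a"],  # full agreement           -> 1.00
    ["a", "b", "b", "c"],  # MCCV "b" coded by 2 of 4 -> 0.50
]
print(lotus(data))  # 0.75
```

Because each unit contributes its own agreement share, the same per-coder logic can be restricted to a single coder's matches with the MCCV, which is what makes coder-level reliability values possible.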

This manual explains how to use the Lotus SPSS custom dialog package to calculate Lotus in unstandardized and standardized form at any level of measurement.

In addition, Lotus addresses several common obstacles in content analysis:

  • Lotus can be applied to categorical, ordinal, or metric scales.
  • The calculation of Lotus is easier to understand than the calculation of Krippendorff’s alpha.
  • The quality of the codebook can be differentiated from the quality of the coder because reliability values can be determined for each coder.
  • In contrast to reliability coefficients based on pairwise comparison, incorrect values do not contribute positively to reliability.
  • Accuracy can be displayed as agreement with a gold standard and is consistent with the intercoder coefficient Lotus.
  • For hierarchical variables, it is easy to display the hierarchical level of a given reliability.
  • The reliability of rare phenomena can be calculated.
  • Data records do not have to be restructured for Lotus. Coders' data records are simply merged with one another.
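Since Lotus works on merged rather than restructured records, the preparation step amounts to joining each coder's file on the coding-unit identifier. A hypothetical sketch (the unit ids and code values are invented for illustration; in SPSS this would be a MATCH FILES step):

```python
# Hypothetical records: coding-unit id -> code, one dict per coder.
coder1 = {1: "a", 2: "a", 3: "b"}
coder2 = {1: "a", 2: "b", 3: "b"}

# Merge on the unit id: one row per coding unit, one column
# per coder, with no restructuring of either record.
merged = {u: [coder1[u], coder2[u]] for u in coder1}
print(merged)  # {1: ['a', 'a'], 2: ['a', 'b'], 3: ['b', 'b']}
```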

Monte Carlo Simulation

This macro displays the characteristics of the Lotus coefficient based on a Monte Carlo simulation and compares them with Krippendorff's alpha. For this purpose, data were simulated with the following prescribed characteristics:

  • the number of categories,
  • the likelihood that coders produce agreeing codings,
  • the likelihood of agreement with a prescribed gold standard.

In addition to the prescribed agreements, there are also chance agreements. The study simulated how strongly coders' decisions are determined by codebook guidelines and training, and to what extent additional chance agreements occur. If, for example, agreement for a variable with two categories is set to 50%, a further 25% of codings will agree by chance, since the remaining 50% have an even likelihood of agreeing as well.
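The arithmetic of the example above can be checked directly. The helper below assumes two coders and k equally likely categories for the non-prescribed codings, which matches the 50%/25% example:

```python
def additional_chance_agreement(p, k):
    """Share of codings that agree by chance on top of the
    prescribed agreement p, assuming the remaining (1 - p)
    codings are spread evenly over k categories."""
    return (1 - p) / k

# Two categories, 50% prescribed agreement: a further 25%
# of codings agree by chance, for 75% total agreement.
print(additional_chance_agreement(0.5, 2))  # 0.25
```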

The simulation comprised 1,000 coding units and 20 coders. This scale is larger than in conventional reliability tests, but because Monte Carlo simulations deal with random processes, the character of the coefficients can be recognized more clearly when large random samples are simulated.
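A hedged sketch of such a simulation: the exact design of the study is not reproduced here, but the basic mechanism, each coder coding the prescribed value with a given probability and otherwise picking a category at random, can be written in a few lines. The parameter names and defaults are assumptions for illustration only:

```python
import random
from collections import Counter

def simulate_lotus(n_units=1000, n_coders=20, k=2,
                   p_agree=0.5, seed=1):
    """Monte Carlo sketch: each coder codes the prescribed
    (gold standard) value with probability p_agree, otherwise
    picks one of k categories uniformly at random. Returns the
    mean share of coders matching the MCCV per coding unit."""
    rng = random.Random(seed)
    shares = []
    for _ in range(n_units):
        gold = rng.randrange(k)
        codes = [gold if rng.random() < p_agree else rng.randrange(k)
                 for _ in range(n_coders)]
        mccv_count = Counter(codes).most_common(1)[0][1]
        shares.append(mccv_count / n_coders)
    return sum(shares) / len(shares)

value = simulate_lotus()
```

With p_agree = 0.5 and k = 2, each coder effectively codes the gold value 75% of the time (50% prescribed plus 25% chance), so the resulting value illustrates how chance agreement inflates the observed coefficient.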