01 HME 712 Week 1 Audio Assoc Agree Kappa Statistic 1

Good day and welcome to this slideshow, which is all about measuring agreement
when we want to see whether two measures agree. Here we have the outline for this
slideshow. First we're going to explain the difference between association and
agreement. Then we're going to talk about methods of agreement, and later we're
going to focus on the Kappa statistic.
On this slide, you see a scatterplot of body mass, or weight, plotted against height in a
number of individuals. Clearly there's an association: as height increases, body mass
tends to increase, and you can see that on the scatterplot. However, the heights and
weights are not equal; they're not the same. For example, the mean body mass was
65, but the mean height was 165. So they're not equal, they do not agree, but they are
associated.
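To make the distinction concrete, here is a minimal sketch in Python, using made-up heights and weights rather than the actual data behind the slide, showing two measures that are strongly associated (high correlation) but clearly do not agree (large mean difference).

```python
# A minimal sketch (hypothetical numbers, not the data behind the slide's scatterplot)
# of association without agreement: height and weight rise together, but their
# values are nowhere near equal.
from scipy.stats import pearsonr

height_cm = [150, 158, 165, 172, 180, 188]   # heights of six individuals
weight_kg = [52, 58, 65, 70, 79, 86]         # their body masses

r, _ = pearsonr(height_cm, weight_kg)
print(f"Pearson r = {r:.2f}")                # close to 1: strong association

diffs = [h - w for h, w in zip(height_cm, weight_kg)]
print(f"Mean difference = {sum(diffs) / len(diffs):.1f}")   # far from 0: no agreement
```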
On this slide, we see four hypothetical examples of situations where we might want to
test or measure the amount of agreement between two data sets carried out on the
same individuals, or by the same individuals. Items one and three involve a continuous
outcome measure, and items two and four involve categorical outcome measures. Just note
that in item number four, we talk about a Likert questionnaire. That's the kind of
questionnaire where you have to either agree or disagree with a statement. So you're
offered, say, five boxes, ranging from strongly agree to strongly disagree, and you
indicate your answer by the box you select.
On this slide, we see an arrangement of association measures and agreement
measures based on data type. For the association measures, which you've already dealt
with in a previous module, you'll see that for categorical data we use odds ratios, risk
ratios, relative risks, relative proportions, prevalence odds ratios, and so on. For
numerical data, we use Pearson's correlation coefficient if the data are normally
distributed. And we'll see that where the data are numerical but not normally
distributed, we use something called Spearman's rank correlation coefficient. We will
be covering that a little bit later in this module when we cover nonparametric tests.
For the agreement measures, however, which are the focus of this week, for
categorical data we have two options. If it's a two-by-two table, in other words a binary
variable, we have the Kappa statistic. If it's more than binary, in other words there are
three or four categories or even more, we use a weighted Kappa statistic; this
will become clearer later. For numerical data, we use what's called the method of
Bland and Altman, which is a special graphical approach. So you see that there are
different measures that have to be used depending on the kind of data that we've
got. It's very important to first be clear about what kind of data we're dealing with
before we select the correct measure.
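As a rough guide to how these choices look in practice, here is a short Python sketch. The functions named here, from scipy and scikit-learn, are one common toolkit rather than anything prescribed by this module, and the numbers are made up purely for illustration.

```python
# Illustrative only: picking a measure based on data type, under the assumption
# that scipy and scikit-learn are the tools at hand.
import numpy as np
from scipy.stats import pearsonr, spearmanr        # association, numerical data
from sklearn.metrics import cohen_kappa_score      # agreement, categorical data

x = np.array([1.2, 2.4, 3.1, 4.8, 5.0])
y = np.array([1.0, 2.6, 2.9, 5.1, 4.7])
print(pearsonr(x, y)[0])        # numerical, normally distributed: Pearson
print(spearmanr(x, y)[0])       # numerical, not normally distributed: Spearman

rater1 = ["pos", "neg", "pos", "neg", "pos"]
rater2 = ["pos", "neg", "neg", "neg", "pos"]
print(cohen_kappa_score(rater1, rater2))           # binary ratings: plain Kappa
print(cohen_kappa_score([0, 1, 2, 2], [0, 2, 2, 1],
                        weights="linear"))         # ordered categories: weighted Kappa

# Bland and Altman (numerical agreement): normally shown as a plot of the mean of
# each pair against the difference; here we just report the bias and limits of agreement.
diff = x - y
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)
print(bias, bias - loa, bias + loa)
```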
On this slide, we have a hypothetical example. Two doctors examined 20 X-rays to
decide whether or not the people who were X-rayed have tuberculosis. The
problem is that even if the doctors agreed on all 20, which is unlikely, but let's say
that they did, on some of these they could have agreed just by guessing. What we
want to do is to take away the ones where they might have guessed, and to look at the
agreement in the remainder. In other words, what is the agreement beyond chance, or
beyond guessing?
So let's say that we were able to calculate that there were possibly three times that
they could have agreed just by guessing. That leaves another 17 opportunities to agree
beyond what would have happened by guessing. So we are actually dealing with 17
opportunities beyond
chance: 20 minus the three that were possibly just by guessing. If they agree on
the diagnosis in 12 of these 17, then the percentage of agreement beyond chance is
100 times 12 over 17, which comes to 70.59%. The Kappa statistic is then 0.71.
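As a quick check, here is that arithmetic written out in Python; it uses only the numbers quoted on the slide.

```python
# The X-ray example: 20 films, 3 agreements attributable to guessing,
# 12 agreements among the remaining 17 opportunities.
opportunities_beyond_chance = 20 - 3     # 17
agreements_beyond_chance = 12

kappa = agreements_beyond_chance / opportunities_beyond_chance
print(f"{kappa:.4f}")                    # 0.7059, i.e. 70.59%, Kappa of about 0.71
```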
So now let's see how we actually work out the Kappa statistic. In this hypothetical
example, we have two different observers, observation one and observation two, and
they either say something is positive, meaning it's present, or negative, meaning it's
absent. Looking at that table, you can see immediately that they both agree on the 30
that were positive-positive and on the 45 that were negative-negative. On the remaining
ones, the 15 and the 10, they disagreed. So there are in total 100 opportunities for
them to agree.
The proportion of agreement, which we also call the concordance, is simply 30 plus 45
out of 100. Those are the ones where they agreed: on 30 of them they agreed the disease
was present, and on 45 they agreed the disease was absent. That comes to 75%, or
0.75. However, if they were guessing, they would probably agree on 51 of these 100
just by chance. Please accept this number for now; we'll explain in the third
slideshow for this week how this 51 was calculated.
Because 51 agreements might have happened just by chance, we are saying that there
are actually 100 minus 51 opportunities left over to agree beyond chance. In other
words, this is what we need to use to calculate Kappa, because Kappa is about the
agreement beyond chance, after we've excluded the guessing or possible guessing.
In total they agreed on 75 diagnoses, which includes the 51 from guessing, so we
subtract that 51. That leaves us with 75 minus 51, which equals 24 agreements beyond
chance, out of 100 minus 51, which is 49 opportunities beyond chance. So Kappa
equals 24 over 49, which comes to 0.49.
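To pull the whole calculation together, here is a minimal sketch in Python. The split of the 15 and 10 disagreements across the two off-diagonal cells is an assumption chosen so that the expected chance agreement comes out at the 51 quoted above, and the expected-agreement formula from the marginal totals is the standard one that the third slideshow will explain.

```python
# Kappa for the 2x2 table above. The cell layout is an assumption consistent with the
# slideshow's totals: 30 positive-positive, 45 negative-negative, 15 and 10 disagreements.
a, b = 30, 15          # observer 1 positive: observer 2 positive / negative
c, d = 10, 45          # observer 1 negative: observer 2 positive / negative
n = a + b + c + d      # 100 opportunities to agree

observed = (a + d) / n                                       # 75 / 100 = 0.75
# Chance agreement from the marginal totals (the 51 accepted on faith above):
expected = ((a + b) * (a + c) + (c + d) * (b + d)) / n**2    # 5100 / 10000 = 0.51

kappa = (observed - expected) / (1 - expected)               # (0.75 - 0.51) / (1 - 0.51)
print(round(kappa, 2))                                       # 0.49
```

Laying the same 100 paired ratings out as two lists and passing them to sklearn.metrics.cohen_kappa_score should give the same value.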
