# Concordance Correlation Coefficient Agreement

As popular as it is, the Pearson correlation is only appropriate for measuring the association between ui and vi if the two variables follow a linear relationship. If the bivariate pairs (ui, vi) follow a nonlinear relationship, ρ̂ is not an informative measure and is difficult to interpret. The concordance correlation coefficient (CCC) is another measure of agreement which, unlike the ICC, does not assume from the outset a common mean for the judges' ratings. It can therefore be used to assess both the degree of agreement and the nature of any disagreement. One of its main limitations, however, is that it applies to only two judges at a time. Computed on a data set of length N, i.e. N paired values (x_n, y_n) for n = 1, …, N, the sample CCC takes the form

ρ̂_c = 2 s_xy / (s_x² + s_y² + (x̄ − ȳ)²),

where x̄ and ȳ are the sample means, s_x² and s_y² the sample variances, and s_xy = (1/N) Σ (x_n − x̄)(y_n − ȳ) the sample covariance.
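The estimator above can be sketched directly in code. This is a minimal illustration, not a reference implementation; the function name `concordance_ccc` is an assumption, and it uses the biased (1/N) moment estimators matching the formula:

```python
import numpy as np

def concordance_ccc(x, y):
    """Sample concordance correlation coefficient for paired data x, y."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    mx, my = x.mean(), y.mean()
    # Biased (1/N) variances and covariance, as in the estimator above.
    sx2 = np.mean((x - mx) ** 2)
    sy2 = np.mean((y - my) ** 2)
    sxy = np.mean((x - mx) * (y - my))
    return 2 * sxy / (sx2 + sy2 + (mx - my) ** 2)
```

For identical ratings the CCC equals 1; a constant shift between the two raters leaves the Pearson correlation at 1 but pulls the CCC below 1 through the (x̄ − ȳ)² term in the denominator.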

Example 5. Let us return to Example 4 and set yi1 = ui and yi2 = vi. Fitting the model in (9) to the data yields the estimates σ̂_β² = 0 and σ̂² = 9.167. The (sample-based) ICC is therefore ICC = 0, which is very different from the Pearson correlation: although the judges' ratings are perfectly correlated, the agreement between the judges is extremely poor. It can be shown that ρ_CCC = 1 (−1) if and only if ρ = 1 (−1), μ1 = μ2 and σ1² = σ2²; equivalently, by (10), ρ_CCC = 1 (−1) if and only if yi1 = yi2 (yi1 = −yi2), i.e. if there is perfect agreement (disagreement). The bias correction factor Cb (0 ≤ Cb ≤ 1) in (12) measures the degree of bias, with smaller Cb indicating greater bias.
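The decomposition ρ_CCC = ρ · Cb can be illustrated numerically. The ratings below are hypothetical (the data of Example 4 are not reproduced here): the second judge's scores are an exact linear transform of the first's, so ρ = 1 while Cb, and hence the CCC, is small:

```python
import numpy as np

# Hypothetical ratings: perfectly correlated (y2 = 2*y1 + 3) yet in poor
# agreement, so rho = 1 but the bias correction factor Cb is small.
y1 = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y2 = 2 * y1 + 3

rho = np.corrcoef(y1, y2)[0, 1]          # Pearson correlation

m1, m2 = y1.mean(), y2.mean()
s1 = np.mean((y1 - m1) ** 2)             # biased (1/N) variance of judge 1
s2 = np.mean((y2 - m2) ** 2)             # biased (1/N) variance of judge 2
s12 = np.mean((y1 - m1) * (y2 - m2))     # biased (1/N) covariance
ccc = 2 * s12 / (s1 + s2 + (m1 - m2) ** 2)

cb = ccc / rho                           # bias correction factor Cb
```

Here ρ = 1 but ccc = cb ≈ 0.174, reflecting the location and scale shift between the two judges rather than any lack of correlation.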

Therefore, unlike the ICC, a lack of agreement can be due either to a low correlation (small ρ) or to a large bias (small Cb). Since ui and vi are linearly related, the Pearson correlation can be applied, giving ρ̂ = 1 and indicating perfect correlation. However, the data clearly do not indicate perfect agreement; in fact, the two judges hardly agree at all. Similarly, the intraclass correlation, a popular measure of agreement between continuous variables, may not provide sufficient information if the nature of the disagreement is of interest. This report examines the concepts of agreement and correlation and discusses differences in the application of several frequently used measures. In statistics, the concordance correlation coefficient measures agreement between two variables, for example to assess reproducibility or inter-rater reliability. Note that since ρ_ICC ≥ 0, we can either recode some of the judges' ratings or use another index, for example the concordance correlation explained below. Comparing (1) and (4), it becomes clear that ρ̂ is really the Pearson correlation applied to the ranks (qi, ri) of the original variables (ui, vi).
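The last observation, that the rank correlation is just the Pearson correlation applied to the ranks (qi, ri), can be checked directly. The data below are hypothetical, and the `ranks` helper is a minimal sketch that assumes no ties:

```python
import numpy as np

def pearson(a, b):
    # Pearson correlation via the sample correlation matrix.
    return np.corrcoef(a, b)[0, 1]

def ranks(v):
    # Ranks 1..n of the entries of v (minimal sketch, assumes no ties).
    order = np.argsort(v)
    r = np.empty(len(v))
    r[order] = np.arange(1, len(v) + 1)
    return r

# u is a nonlinear but monotone function of v (here u = 2*v**2).
v = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
u = 2 * v ** 2

rho_spearman = pearson(ranks(u), ranks(v))   # Pearson on the ranks (qi, ri)
```

Because the relationship is monotone, the rank correlation equals 1 even though the Pearson correlation on the raw (u, v) values is strictly below 1, which is exactly why rank-based measures are preferred for nonlinear monotone relationships.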

. . .