Mann–Whitney U test

In statistics, the Mann–Whitney U test (also called the Mann–Whitney–Wilcoxon (MWW), Wilcoxon rank-sum test (WRS), or Wilcoxon–Mann–Whitney test) is a nonparametric test of the null hypothesis that two samples come from the same population, against the alternative that one population tends to have larger values than the other.

Unlike the t-test, which formally requires normally distributed data, it can be applied to samples from unknown distributions, and it is nearly as efficient as the t-test when the data are in fact normal.

The Wilcoxon rank-sum test is not the same as the Wilcoxon signed-rank test, although both are nonparametric and involve summation of ranks.

Contents

  • 1 Assumptions and formal statement of hypotheses
  • 2 Calculations
  • 3 Properties
  • 4 Examples
    • 4.1 Illustration of calculation methods
    • 4.2 Illustration of object of test
  • 5 Normal approximation
  • 6 Effect sizes
    • 6.1 Common language effect size
    • 6.2 Rank-biserial correlation
  • 7 Relation to other tests
    • 7.1 Comparison to Student's t-test
    • 7.2 Area-under-curve (AUC) statistic for ROC curves
    • 7.3 Different distributions
      • 7.3.1 Alternatives
  • 8 History
  • 9 Related test statistics
    • 9.1 Kendall's τ
    • 9.2 ρ statistic
  • 10 Example statement of results
  • 11 Implementations
  • 12 See also
  • 13 Notes
  • 14 References
  • 15 External links

Assumptions and formal statement of hypotheses

Although Mann and Whitney[1] developed the MWW test under the assumption of continuous responses with the alternative hypothesis being that one distribution is stochastically greater than the other, there are many other ways to formulate the null and alternative hypotheses such that the MWW test will give a valid test.[2]

A very general formulation is to assume that:

  1. All the observations from both groups are independent of each other,
  2. The responses are ordinal (i.e. one can at least say, of any two observations, which is the greater),
  3. Under the null hypothesis H0, the probability of an observation from the population X exceeding an observation from the second population Y equals the probability of an observation from Y exceeding an observation from X : P(X > Y) = P(Y > X) or P(X > Y) + 0.5·P(X = Y) = 0.5. A stronger null hypothesis commonly used is "The distributions of both populations are equal" which implies the previous hypothesis.
  4. The alternative hypothesis H1 is "the probability of an observation from the population X exceeding an observation from the second population Y is different from the probability of an observation from Y exceeding an observation from X : P(X > Y) ≠ P(Y > X)." The alternative may also be stated in terms of a one-sided test, for example: P(X > Y) > P(Y > X)

Under more strict assumptions than those above, e.g., if the responses are assumed to be continuous and the alternative is restricted to a shift in location (i.e. F1(x) = F2(x + δ)), we can interpret a significant MWW test as showing a difference in medians. Under this location shift assumption, we can also interpret the MWW as assessing whether the Hodges–Lehmann estimate of the difference in central tendency between the two populations differs from zero. The Hodges–Lehmann estimate for this two-sample problem is the median of all possible differences between an observation in the first sample and an observation in the second sample.
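As an illustration, the two-sample Hodges–Lehmann estimate can be computed directly from its definition; the following Python sketch (with hypothetical sample data) forms all pairwise differences and takes their median:

    import numpy as np

    def hodges_lehmann(x, y):
        """Two-sample Hodges-Lehmann estimate: the median of all
        pairwise differences x_i - y_j."""
        x = np.asarray(x, dtype=float)
        y = np.asarray(y, dtype=float)
        diffs = np.subtract.outer(x, y).ravel()  # all n1 * n2 differences
        return np.median(diffs)

    # Hypothetical samples
    print(hodges_lehmann([1.1, 2.3, 3.8, 4.0], [0.9, 2.0, 2.2]))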

Calculations

The test involves the calculation of a statistic, usually called U, whose distribution under the null hypothesis is known. For small samples the distribution is tabulated, but for sample sizes above about 20, approximation using the normal distribution is fairly good. Some books tabulate statistics equivalent to U, such as the sum of ranks in one of the samples, rather than U itself.

The U test is included in most modern statistical packages. It is also easily calculated by hand, especially for small samples. There are two ways of doing this.

Method one:

For comparing two small sets of observations, a direct method is quick and gives insight into the meaning of the U statistic, which corresponds to the number of wins out of all pairwise contests (see the tortoise and hare example under Examples below). For each observation in one set, count the number of observations in the other set that it beats (i.e., that are smaller), counting 0.5 for any ties. The sum of wins and ties is U for the first set; U for the other set is obtained in the same way with the roles of the two sets reversed, as in the sketch below.
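A minimal sketch of the direct method in Python (the sample values are hypothetical):

    def u_direct(first, second):
        """U for `first`: one point for each pairwise win over `second`,
        half a point for each tie."""
        u = 0.0
        for a in first:
            for b in second:
                if a > b:
                    u += 1.0
                elif a == b:
                    u += 0.5
        return u

    x, y = [6, 7, 8], [1, 7, 9, 10]
    print(u_direct(x, y), u_direct(y, x))  # 4.5 and 7.5; they sum to 3 * 4 = 12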

Method two:

For larger samples:

  1. Assign numeric ranks to all the observations, beginning with 1 for the smallest value. Where there are groups of tied values, assign a rank equal to the midpoint of unadjusted rankings [e.g., the ranks of (3, 5, 5, 9) are (1, 2.5, 2.5, 4)].
  2. Now, add up the ranks for the observations which came from sample 1. The sum of ranks in sample 2 is now determinate, since the sum of all the ranks equals N(N + 1)/2 where N is the total number of observations.
  3. U is then given by:[3]
U_1 = R_1 - \frac{n_1(n_1+1)}{2}
where n1 is the sample size for sample 1, and R1 is the sum of the ranks in sample 1.
Note that it doesn't matter which of the two samples is considered sample 1. An equally valid formula for U is
U_2 = R_2 - \frac{n_2(n_2+1)}{2}
The smaller value of U1 and U2 is the one used when consulting significance tables. The sum of the two values is given by
U_1 + U_2 = R_1 - \frac{n_1(n_1+1)}{2} + R_2 - \frac{n_2(n_2+1)}{2}.
Knowing that R_1 + R_2 = N(N+1)/2 and N = n_1 + n_2, and doing some algebra, we find that the sum is
U_1 + U_2 = n_1 n_2.
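A sketch of method two in Python, using SciPy's rankdata (which assigns midpoint ranks to ties) to reproduce the direct-method values from the sketch above:

    import numpy as np
    from scipy.stats import rankdata

    def u_from_ranks(x, y):
        """U1 = R1 - n1 * (n1 + 1) / 2, with R1 the rank sum of sample 1."""
        ranks = rankdata(np.concatenate([x, y]))  # midpoint ranks for ties
        n1 = len(x)
        r1 = ranks[:n1].sum()
        return r1 - n1 * (n1 + 1) / 2

    x, y = [6, 7, 8], [1, 7, 9, 10]
    print(u_from_ranks(x, y), u_from_ranks(y, x))  # 4.5 and 7.5 again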

Properties

The maximum value of U is the product of the sample sizes for the two samples. In such a case, the "other" U would be 0.

Examples

Illustration of calculation methods

Suppose that Aesop is dissatisfied with his classic experiment in which one tortoise was found to beat one hare in a race, and decides to carry out a significance test to discover whether the results could be extended to tortoises and hares in general. He collects a sample of 6 tortoises and 6 hares, and makes them all run his race at once. The order in which they reach the finishing post (their rank order, from first to last crossing the finish line) is as follows, writing T for a tortoise and H for a hare:

T H H H H H T T T T T H

What is the value of U?

  • Using the direct method, we take each tortoise in turn, and count the number of hares it beats, getting 6, 1, 1, 1, 1, 1, which means that U = 11. Alternatively, we could take each hare in turn, and count the number of tortoises it beats. In this case, we get 5, 5, 5, 5, 5, 0, so U = 25. Note that the sum of these two values for U is 36, which is 6 × 6.
  • Using the indirect method:
    • Rank the animals by the time they take to complete the course, so the first animal home receives rank 12, the second rank 11, and so forth.
    • The sum of the ranks achieved by the tortoises is 12 + 6 + 5 + 4 + 3 + 2 = 32. Therefore U = 32 − (6×7)/2 = 32 − 21 = 11 (the same as with method one).
    • The sum of the ranks achieved by the hares is 11 + 10 + 9 + 8 + 7 + 1 = 46, leading to U = 46 − 21 = 25.
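The same value can be checked against SciPy's implementation. One caveat: which U the function returns has varied across SciPy versions; recent versions return the U statistic for the first sample passed, so this sketch works under that assumption:

    from scipy.stats import mannwhitneyu

    # Scores as in the example: the first animal home receives 12, the last 1.
    tortoises = [12, 6, 5, 4, 3, 2]
    hares = [11, 10, 9, 8, 7, 1]

    res = mannwhitneyu(tortoises, hares, alternative='two-sided')
    print(res.statistic)  # 11.0, matching both hand calculations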

Illustration of object of test

A second example race illustrates the point that the Mann–Whitney does not test for inequality of medians, but rather for difference of distributions. Consider another hare and tortoise race, with 19 participants of each species, in which the outcomes are as follows, from first to last past the finishing post:

H H H H H H H H H T T T T T T T T T T H H H H H H H H H H T T T T T T T T T

If we simply compared medians, we would conclude that the median time for tortoises is less than the median time for hares, because the median tortoise here comes in at position 19, and thus actually beats the median hare, which comes in at position 20. However, the value of U is 100 (using the quick method of calculation described above, we see that each of 10 tortoises beats each of 10 hares, so U = 10 × 10). Consulting tables, or using the approximation below, we find that this U value gives significant evidence that hares tend to have lower completion times than tortoises (p < 0.05, two-tailed). Obviously these are extreme distributions that would be spotted easily, but in larger samples something similar could happen without it being so apparent. Notice that the problem here is not that the two distributions of ranks have different variances; they are mirror images of each other, so their variances are the same, but they have very different skewness.

Normal approximation

For large samples, U is approximately normally distributed. In that case, the standardized value

z = \frac{U - m_U}{\sigma_U},

where mU and σU are the mean and standard deviation of U, is approximately a standard normal deviate whose significance can be checked in tables of the normal distribution. mU and σU are given by

m_U = \frac{n_1 n_2}{2}, and
\sigma_U = \sqrt{\frac{n_1 n_2 (n_1 + n_2 + 1)}{12}}.

The formula for the standard deviation is more complicated in the presence of tied ranks; the full formula is given in the textbooks referenced below. However, if the number of ties is small (and especially if there are no large tie bands), ties can be ignored when doing calculations by hand. Statistical software packages use the correctly adjusted formula as a matter of routine.

Note that since U1 + U2 = n1 n2, the mean n1 n2/2 used in the normal approximation is the mean of the two values of U. Therefore, the absolute value of the z statistic calculated will be the same whichever value of U is used.
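A sketch of the normal approximation (without the tie correction), applied to the 19-versus-19 race above, where U = 100:

    import math

    def z_approx(u, n1, n2):
        """Standardize U using its null mean and standard deviation (no ties)."""
        m_u = n1 * n2 / 2
        sigma_u = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
        return (u - m_u) / sigma_u

    print(z_approx(100, 19, 19))  # about -2.35, significant at p < 0.05 two-tailed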

Effect sizes

It is a widely recommended practice for scientists to report an effect size for an inferential test.[4][5]

Common language effect size

One method of reporting the effect size for the Mann–Whitney U test is with the common language effect size.[6] As a sample statistic, the common language effect size is computed by forming all possible pairs between the two groups and then finding the proportion of pairs that support a hypothesis.[7] To illustrate, in a study with a sample of ten hares and ten tortoises, the total number of pairs is ten times ten, or 100 pairs of hares and tortoises. Suppose the results show that the hare ran faster than the tortoise in 90 of the 100 sample pairs; in that case, the sample common language effect size is 90%. This sample value is an unbiased estimator of the population value, so the sample suggests that the best estimate of the common language effect size in the population is 90%.[8]
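A sketch of the common language effect size as a pairwise proportion (here ties are counted as half a pair, consistent with the treatment of ties in U; the data are hypothetical):

    import numpy as np

    def common_language_es(x, y):
        """Proportion of all (x, y) pairs in which x exceeds y (ties count 0.5)."""
        x = np.asarray(x, dtype=float)
        y = np.asarray(y, dtype=float)
        greater = (x[:, None] > y[None, :]).sum()
        ties = (x[:, None] == y[None, :]).sum()
        return (greater + 0.5 * ties) / (len(x) * len(y))

    print(common_language_es([3, 4, 5], [1, 2, 4]))  # 7.5 / 9 = 0.8333...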

Rank-biserial correlation

A second method of reporting the effect size for the Mann–Whitney U test is with a measure of rank correlation known as the rank-biserial correlation. Edward Cureton introduced and named the measure.[9] Like other correlational measures, the rank-biserial correlation can range from minus one to plus one, with a value of zero indicating no relationship. Dave Kerby[6] introduced the simple difference formula to compute the rank-biserial correlation from the common language effect size: the correlation is the proportion of pairs that support the hypothesis minus the proportion that do not. Stated another way, the correlation is the difference between the common language effect size and its complement. For example, in the case where hares run faster than tortoises in 90 of 100 pairs, the common language effect size is 90%, so the rank-biserial correlation is 90% minus 10%, giving r = .80.

Hans Wendt[10] described a formula to compute the rank-biserial correlation from the Mann–Whitney U and the sample sizes of each group: r = 1 − 2U/(n1 × n2). This formula is useful when the raw data are not available but a published report is, because U and the sample sizes are routinely reported. Using the example above with 90 pairs that favor the hares and 10 pairs that favor the tortoises, U is the smaller of the two, so U = 10. The Wendt formula then gives r = 1 − (2×10)/(10×10) = .80, which is of course the same result as with the Kerby simple difference formula.
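Both formulas are one-liners; a sketch using the figures from the example (90 favorable pairs out of 100, U = 10):

    def rank_biserial_from_u(u, n1, n2):
        """Wendt's formula: r = 1 - 2U / (n1 * n2), with U the smaller value."""
        return 1 - 2 * u / (n1 * n2)

    def rank_biserial_from_cl(f):
        """Kerby's simple difference formula: r = f - (1 - f)."""
        return f - (1 - f)

    print(rank_biserial_from_u(10, 10, 10))  # 0.8
    print(rank_biserial_from_cl(0.90))       # 0.8, the same result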

Relation to other tests

Comparison to Student's t-test

The U test is more widely applicable than independent samples Student's t-test, and the question arises of which should be preferred.

Ordinal data
U remains the logical choice when the data are ordinal but not interval scaled, so that the spacing between adjacent values cannot be assumed to be constant.
Robustness
As it compares the sums of ranks,[11] the Mann–Whitney test is less likely than the t-test to spuriously indicate significance because of the presence of outliers – i.e. Mann–Whitney is more robust.
Efficiency
When normality holds, MWW has an (asymptotic) efficiency of 3/π, or about 0.95, when compared to the t test.[12] For distributions sufficiently far from normal and for sufficiently large sample sizes, the MWW is considerably more efficient than the t.[13]

Overall, the robustness makes the MWW more widely applicable than the t test, and for large samples from the normal distribution, the efficiency loss compared to the t test is only 5%, so one can recommend MWW as the default test for comparing interval or ordinal measurements with similar distributions.

The relation between efficiency and power in concrete situations is not trivial, however. For small sample sizes one should investigate the power of the MWW vs the t test.

MWW will give very similar results to performing an ordinary parametric two-sample t test on the rankings of the data.[14]
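This near-equivalence is easy to check numerically; a sketch with simulated data (the parameter values are arbitrary):

    import numpy as np
    from scipy.stats import mannwhitneyu, rankdata, ttest_ind

    rng = np.random.default_rng(0)
    x = rng.normal(0.0, 1.0, 30)
    y = rng.normal(0.5, 1.0, 30)

    p_mww = mannwhitneyu(x, y, alternative='two-sided').pvalue

    ranks = rankdata(np.concatenate([x, y]))       # pool, then rank
    p_rank_t = ttest_ind(ranks[:30], ranks[30:]).pvalue

    print(p_mww, p_rank_t)  # typically close, though not identical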

Area-under-curve (AUC) statistic for ROC curves

The U statistic is equivalent to the area under the receiver operating characteristic (ROC) curve, which can be readily calculated:[15][16]

AUC_1 = \frac{U_1}{n_1 n_2}

Because of its probabilistic form, the U statistic can be generalised to a measure of a classifier's separation power for more than two classes:[17]

M = \frac{1}{c(c-1)} \sum_{k \neq l} AUC_{k,l}

where c is the number of classes, and the R_{k,l} term of AUC_{k,l} considers only the ranking of the items belonging to classes k and l (i.e., items belonging to all other classes are ignored) according to the classifier's estimates of the probability of those items belonging to class k. AUC_{k,k} will always be zero, but, unlike in the two-class case, generally AUC_{k,l} ≠ AUC_{l,k}, which is why the M measure sums over all (k, l) pairs, in effect using the average of AUC_{k,l} and AUC_{l,k}.
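For the two-class case, the identity AUC_1 = U_1/(n_1 n_2) can be checked directly by computing the AUC as the proportion of correctly ordered (positive, negative) score pairs; a sketch with hypothetical classifier scores:

    import numpy as np

    def auc_from_pairs(pos_scores, neg_scores):
        """AUC as the probability that a positive outscores a negative
        (ties count as half)."""
        pos = np.asarray(pos_scores, dtype=float)[:, None]
        neg = np.asarray(neg_scores, dtype=float)[None, :]
        n_pairs = pos.size * neg.size
        return ((pos > neg).sum() + 0.5 * (pos == neg).sum()) / n_pairs

    pos = [0.9, 0.8, 0.4]           # hypothetical scores for class 1
    neg = [0.7, 0.3, 0.2, 0.1]      # hypothetical scores for class 2
    auc1 = auc_from_pairs(pos, neg)
    print(auc1, auc1 * len(pos) * len(neg))  # AUC_1 = 11/12, so U_1 = 11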

Different distributions

If one is only interested in stochastic ordering of the two populations (i.e., the concordance probability P(Y > X)), the U test can be used even if the shapes of the distributions are different. The concordance probability is exactly equal to the area under the receiver operating characteristic (ROC) curve that is often used in this context.

Alternatives

If one desires a simple shift interpretation, the U test should not be used when the distributions of the two samples are very different, as it can give erroneously significant results.[18] In that situation, the unequal variances version of the t test is likely to give more reliable results, but only if normality holds.

Alternatively, some authors (e.g. Conover) suggest transforming the data to ranks (if they are not already ranks) and then performing the t test on the transformed data, with the version of the t test depending on whether or not the population variances are suspected to be different. Rank transformations do not preserve variances, but the variances are recomputed from the samples after the rank transformation.

The Brown–Forsythe test has been suggested as an appropriate non-parametric equivalent to the F test for equal variances.

See also Kolmogorov–Smirnov test.

History

The statistic appeared in a 1914 article [19] by the German Gustav Deuchler (with a missing term in the variance).

As a one-sample statistic, the signed rank was proposed by Frank Wilcoxon in 1945,[20] with some discussion of a two-sample variant for equal sample sizes, in a test of significance with a point null-hypothesis against its complementary alternative (that is, equal versus not equal).

A thorough analysis of the statistic, which included a recurrence allowing the computation of tail probabilities for arbitrary sample sizes and tables for sample sizes of eight or less, appeared in the article by Henry Mann and his student Donald Ransom Whitney in 1947.[1] This article discussed alternative hypotheses, including a stochastic ordering (where the cumulative distribution functions satisfy the pointwise inequality F_X(t) < F_Y(t)). The paper also computed the first four moments and established the limiting normality of the statistic under the null hypothesis, thus establishing that it is asymptotically distribution-free.

Related test statistics

Kendall's τ

The U test is related to a number of other non-parametric statistical procedures. For example, it is equivalent to Kendall's τ correlation coefficient if one of the variables is binary (that is, it can only take two values).

ρ statistic

A statistic called ρ that is linearly related to U and widely used in studies of categorization (discrimination learning involving concepts), and elsewhere,[21] is calculated by dividing U by its maximum value for the given sample sizes, which is simply n1 × n2. ρ is thus a non-parametric measure of the overlap between two distributions; it can take values between 0 and 1, and it is an estimate of P(Y > X) + 0.5 P(Y = X), where X and Y are randomly chosen observations from the two distributions. Both extreme values represent complete separation of the distributions, while a ρ of 0.5 represents complete overlap. The usefulness of the ρ statistic can be seen in the second race example above, where two distributions that were significantly different on a U test nonetheless had nearly identical medians: the ρ value in this case is approximately 0.723 in favour of the hares, correctly reflecting the fact that even though the median tortoise beat the median hare, the hares collectively did better than the tortoises collectively.
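Numerically, ρ is just U over n1 × n2; for the second race example above (19 hares and 19 tortoises, with the hares' U equal to 19 × 19 − 100 = 261), a sketch:

    def rho(u, n1, n2):
        """rho = U / (n1 * n2): estimate of P(Y > X) + 0.5 * P(Y = X)."""
        return u / (n1 * n2)

    print(rho(261, 19, 19))  # about 0.723, in favour of the hares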

Example statement of results

In reporting the results of a Mann–Whitney test, it is important to state:

  • A measure of the central tendencies of the two groups (means or medians; since the Mann–Whitney is an ordinal test, medians are usually recommended)
  • The value of U
  • The sample sizes
  • The significance level.

In practice some of this information may already have been supplied and common sense should be used in deciding whether to repeat it. A typical report might run,

"Median latencies in groups E and C were 153 and 247 ms; the distributions in the two groups differed significantly (Mann–Whitney U = 10.5, n1 = n2 = 8, P < 0.05 two-tailed)."

A statement that does full justice to the statistical status of the test might run,

"Outcomes of the two treatments were compared using the Wilcoxon–Mann–Whitney two-sample rank-sum test. The treatment effect (difference between treatments) was quantified using the Hodges–Lehmann (HL) estimator, which is consistent with the Wilcoxon test.[22] This estimator (HLΔ) is the median of all possible differences in outcomes between a subject in group B and a subject in group A. A non-parametric 0.95 confidence interval for HLΔ accompanies these estimates as does ρ, an estimate of the probability that a randomly chosen subject from population B has a higher weight than a randomly chosen subject from population A. The median [quartiles] weight for subjects on treatment A and B respectively are 147 [121, 177] and 151 [130, 180] kg. Treatment A decreased weight by HLΔ = 5 kg (0.95 CL [2, 9] kg, 2P = 0.02, ρ = 0.58)."

However it would be rare to find so extended a report in a document whose major topic was not statistical inference.

Implementations

In many software packages, the Mann–Whitney test (of the hypothesis of equal distributions against appropriate alternatives) has been poorly documented. Some packages incorrectly treat ties or fail to document asymptotic techniques (e.g., correction for continuity). A 2000 review discussed some of the following packages:[23]

  • MATLAB has ranksum in its Statistics Toolbox.
  • R implements the test in its base stats package as wilcox.test; the coin package provides an alternative implementation.
  • SAS implements the test in its PROC NPAR1WAY procedure.
  • Python has an implementation of this test provided by SciPy, as scipy.stats.mannwhitneyu.[24]
  • SigmaStat (SPSS Inc., Chicago, IL)
  • SYSTAT (SPSS Inc., Chicago, IL)
  • JMP (SAS Institute Inc., Cary, NC)
  • S-Plus (MathSoft, Inc., Seattle, WA)
  • STATISTICA (StatSoft, Inc., Tulsa, OK)
  • UNISTAT (Unistat Ltd, London)
  • SPSS (SPSS Inc, Chicago)
  • Arcus Quickstat (Research Solutions, Cambridge, UK)
  • Stata (Stata Corporation, College Station, TX) implements the test in its ranksum command.
  • StatXact (Cytel Software Corporation, Cambridge, MA)
  • PSPP implements the test in its WILCOXON function.

See also

Notes

  1. ^ a b Mann, Henry B.; Whitney, Donald R. (1947). On a test of whether one of two random variables is stochastically larger than the other. Annals of Mathematical Statistics, volume 18(1), pages 50-60.
  2. ^
  3. ^
  4. ^
  5. ^
  6. ^ a b Kerby, D. S. (2014). The simple difference formula: An approach to teaching nonparametric correlation. Innovative Teaching, volume 3, article 1. doi:10.2466/11.IT.3.1. link to pdf
  7. ^ McGraw, K.O., & Wong, S.P. (1992). A common language effect size statistic. Psychological Bulletin, volume 111(2), pages 361-365. doi:10.1037/0033-2909.111.2.361
  8. ^
  9. ^ Cureton, E.E. (1956). Rank-biserial correlation. Psychometrika, volume 21(3), pages 287-290. doi:10.1007/BF02289138
  10. ^ Wendt, H. W. (1972). Dealing with a common problem in social science: A simplified rank-biserial coefficient of correlation based on the U statistic. European Journal of Social Psychology, 2(4), 463–465. doi:10.1002/ejsp.2420020412
  11. ^ Motulsky, Harvey J.; Statistics Guide, San Diego, CA: GraphPad Software, 2007, p. 123
  12. ^ Lehmann, Erich L.; Elements of Large Sample Theory, Springer, 1999, p. 176
  13. ^ Conover, William J.; Practical Nonparametric Statistics, John Wiley & Sons, 1980 (2nd Edition), pp. 225–226
  14. ^
  15. ^
  16. ^
  17. ^
  18. ^
  19. ^
  20. ^ Wilcoxon, Frank (1945). Individual comparisons by ranking methods. Biometrics Bulletin, volume 1(6), pages 80-83.
  21. ^
  22. ^
  23. ^
  24. ^

References

External links

  • Table of critical values of U (pdf)
  • Interactive calculator for U and its significance