Much of the literature shows that the ratings assigned by wine judges are uncertain; some authors have proposed that judges be tested, and a few wine competitions do test their judges. However, no literature or competition has yet proposed a test or rating for judges based on realistic competition conditions. This article uses coefficients of multiple correlation to rate each of 54 judges who assigned ratings to 2,811 wines entered in a commercial competition. The results show a strong, positive correlation between the ratings assigned by most judges to most wines. However, those correlations also show that the ratings assigned by approximately 10% of the judges are indistinguishable from random assignments. Using these correlations to rate the raters, a program is underway to monitor those judges and the variations in competition protocol that may affect their ratings.
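The rating device named above, the coefficient of multiple correlation, can be illustrated with a short sketch: for each judge, regress that judge's ratings on the ratings of the other judges and take the correlation between observed and fitted values. This is a minimal illustration on synthetic data under assumed conditions (a complete ratings matrix, a linear model); it is not the article's actual dataset, protocol, or code, and all names are hypothetical.

```python
import numpy as np

def multiple_correlation(ratings):
    """For each judge (one column of `ratings`), compute the coefficient
    of multiple correlation R between that judge's ratings and the best
    linear combination of the other judges' ratings."""
    n_wines, n_judges = ratings.shape
    coeffs = []
    for j in range(n_judges):
        y = ratings[:, j]
        # Predictors: all other judges' ratings, plus an intercept column
        X = np.column_stack([np.ones(n_wines), np.delete(ratings, j, axis=1)])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        y_hat = X @ beta
        # R is the Pearson correlation between observed and fitted ratings
        coeffs.append(np.corrcoef(y, y_hat)[0, 1])
    return np.array(coeffs)

# Hypothetical data: three judges track a latent wine quality; a fourth
# judge assigns ratings at random, unrelated to the wines.
rng = np.random.default_rng(0)
quality = rng.normal(85.0, 5.0, size=200)                  # latent quality
consistent = quality[:, None] + rng.normal(0.0, 1.0, (200, 3))
random_judge = rng.normal(85.0, 5.0, (200, 1))
R = multiple_correlation(np.hstack([consistent, random_judge]))
```

On data like this, the three consistent judges receive R values near 1, while the random judge's R stays near 0, which is the sense in which a judge's ratings can be "indistinguishable from random assignments."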