What Is a Right-Tailed Test? Definition & Examples

A right-tailed test is a type of hypothesis test where you’re specifically checking whether a value is greater than some claimed number. Instead of looking for any difference (higher or lower), you’re only interested in detecting an increase. The “right-tailed” name comes from where the action happens on a bell curve: the rejection region sits in the right tail of the distribution.

How a Right-Tailed Test Is Set Up

Every hypothesis test starts with two competing statements. The null hypothesis (H₀) is the default claim, typically that a population mean equals some known value k. The alternative hypothesis (H₁) is what you’re trying to find evidence for. In a right-tailed test, the alternative hypothesis uses a greater-than sign:

  • H₀: μ = k (the mean equals the claimed value)
  • H₁: μ > k (the mean is actually larger)

That greater-than sign is what makes it “right-tailed.” You’re betting the true value is higher than what’s claimed, and you’re designing the entire test to detect that specific direction of difference. If the true value turned out to be lower, this test would not flag it, by design.

When You’d Use One

You choose a right-tailed test when you have a clear reason to care only about increases. A few examples make this concrete. Suppose a pharmaceutical company claims its new drug raises average blood oxygen levels above a baseline. The researchers don’t care if the drug lowers oxygen (that would be a different safety question). They want to detect whether it raises it. That’s a right-tailed test.

Or imagine a factory manager suspects a machine is overfilling bottles beyond the labeled 500 mL. Underfilling would be a separate concern. Right now, the question is specifically whether the mean fill volume exceeds 500 mL. Again, the alternative hypothesis points to the right: μ > 500.

A business analyst might test whether a new website layout increased average time on page compared to the old design. A teacher might test whether a tutoring program raised test scores above the school average. In each case, the question has a built-in direction: “Is it higher?”

The Rejection Region

Picture a standard bell curve. In a right-tailed test, you shade a small area on the far right side. That shaded area is the rejection region, and its size equals your significance level (alpha). If your test statistic lands inside that shaded zone, you reject the null hypothesis.

The boundary of the rejection region is called the critical value. For a significance level of 0.05, the critical z-score in a right-tailed test is 1.645. For a stricter significance level of 0.01, it's about 2.33 (2.326 to three decimals). Any test statistic that exceeds the critical value falls into the rejection region, giving you enough evidence to say the mean is likely greater than the claimed value.

Notice the critical value for a right-tailed test at 0.05 is 1.645, not the 1.96 you might have memorized. That 1.96 belongs to two-tailed tests, where the 5% is split between both ends of the curve. In a right-tailed test, all 5% sits on one side.
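You can reproduce these critical values yourself. A minimal sketch using Python's standard library (`statistics.NormalDist`, available since Python 3.8) — the critical value for a right-tailed test is simply the (1 − alpha) quantile of the standard normal distribution:

```python
from statistics import NormalDist

def right_tailed_critical_z(alpha):
    # The point on the standard normal curve with area alpha to its RIGHT,
    # i.e. the (1 - alpha) quantile.
    return NormalDist().inv_cdf(1 - alpha)

print(round(right_tailed_critical_z(0.05), 3))  # 1.645
print(round(right_tailed_critical_z(0.01), 3))  # 2.326 (often rounded to 2.33)
```

The same function with `1 - alpha / 2` would give the two-tailed cutoff, which is where the familiar 1.96 comes from.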

Right-Tailed vs. Two-Tailed Tests

A two-tailed test asks “is the value different from k?” without specifying a direction. It splits alpha between both tails, putting 2.5% on each side when alpha is 0.05. A right-tailed test puts all of alpha in one tail. This makes the right-tailed test more powerful at detecting an effect in that specific direction, because the threshold for significance is lower on that side.
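The power difference is easy to see with a borderline test statistic. In this sketch, the value 1.8 is an illustrative statistic (not from any dataset) that clears the one-tailed bar but not the two-tailed one:

```python
from statistics import NormalDist

alpha = 0.05
z = NormalDist()

# Right-tailed: all of alpha sits in the right tail.
right_tailed_cutoff = z.inv_cdf(1 - alpha)      # ~1.645
# Two-tailed: alpha is split, alpha/2 in each tail.
two_tailed_cutoff = z.inv_cdf(1 - alpha / 2)    # ~1.960

stat = 1.8  # illustrative test statistic
print(stat > right_tailed_cutoff)  # True  -- significant one-tailed
print(stat > two_tailed_cutoff)    # False -- not significant two-tailed
```

This is exactly the "more powerful in one direction" tradeoff: the same data can cross the lower one-sided threshold while falling short of the two-sided one.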

The tradeoff is that you completely ignore the other direction. If the true mean is actually much lower than k, a right-tailed test won’t catch it. You should only use a directional test when you have a genuine, pre-existing reason to look in one direction. Choosing a one-tailed test after peeking at the data to see which way it leans is considered bad practice, because it inflates your chances of a false positive.

How to Run the Test Step by Step

The process follows the same logic as any hypothesis test, with the direction baked into steps 1 and 4.

Step 1: State your hypotheses. Write H₀: μ = k and H₁: μ > k. The value of k comes from whatever claim you’re testing against.

Step 2: Choose your significance level. This is your alpha, commonly set at 0.05. It defines how much risk of a false positive you’re willing to accept.

Step 3: Calculate the test statistic. Using your sample data, compute a z-score or t-score that measures how far your sample mean is from k, in units of standard error. A positive test statistic means your sample mean is above k. The larger the positive value, the stronger the evidence for H₁.

Step 4: Make your decision. You can do this two ways. With the critical value approach, compare your test statistic to the critical value (1.645 for z at alpha = 0.05). If the test statistic is larger, reject H₀. With the p-value approach, find the probability of getting a test statistic as extreme or more extreme than yours, looking only to the right. If that p-value is less than or equal to alpha, reject H₀.

Both approaches always lead to the same reject-or-not decision for a given alpha. The p-value method gives you more information, because you can see exactly how strong the evidence is rather than just whether it crossed a threshold.
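The four steps can be sketched end to end. The numbers below are made up for illustration (they loosely echo the bottle-filling example, with an assumed known population standard deviation so a z-test applies):

```python
from math import sqrt
from statistics import NormalDist

# Illustrative inputs (not real data): H0: mu = 500 vs H1: mu > 500
k = 500.0      # claimed mean fill volume (mL)
xbar = 502.1   # observed sample mean
sigma = 8.0    # assumed known population standard deviation
n = 50         # sample size
alpha = 0.05   # Step 2: significance level

# Step 3: z test statistic -- distance from k in units of standard error.
z = (xbar - k) / (sigma / sqrt(n))

# Step 4a: critical value approach.
critical = NormalDist().inv_cdf(1 - alpha)   # ~1.645
reject_by_critical = z > critical

# Step 4b: p-value approach -- area to the RIGHT of z.
p_value = 1 - NormalDist().cdf(z)
reject_by_p = p_value <= alpha

print(round(z, 3))
print(reject_by_critical, reject_by_p)  # the two decisions always agree
```

With these inputs, z is about 1.86, which exceeds 1.645, and the p-value is a bit above 0.03, which is below 0.05 — both routes reject H₀.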

Reading the P-Value

In a right-tailed test, the p-value is the area under the curve to the right of your test statistic. It answers: “If the null hypothesis were true, how likely would I be to see a sample mean this high or higher?”

A small p-value (say 0.02 when alpha is 0.05) means your result would be very unlikely under the null hypothesis, so you reject H₀. A large p-value (say 0.34) means a sample mean like yours is perfectly ordinary even if H₀ is true, so you fail to reject it.
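P-values like these come straight from the right-tail area. In this sketch, the z-scores (2.05 and 0.41) are illustrative choices that happen to produce p-values near the 0.02 and 0.34 mentioned above:

```python
from statistics import NormalDist

def right_tailed_p(z):
    # Area under the standard normal curve to the RIGHT of z.
    return 1 - NormalDist().cdf(z)

print(round(right_tailed_p(2.05), 3))  # 0.02  -- small: reject H0 at alpha = 0.05
print(round(right_tailed_p(0.41), 2))  # 0.34  -- large: fail to reject H0
```

Note the one-sidedness: a strongly negative z-score would give a p-value near 1, because almost the entire curve lies to its right — the test simply cannot flag decreases.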

Keep in mind that failing to reject H₀ is not the same as proving H₀ is correct. It just means you didn’t find strong enough evidence of an increase. Your sample might have been too small, or the true difference might be too subtle to detect with your data.

Common Mistakes to Avoid

The most frequent error is choosing a right-tailed test after collecting data and noticing the sample mean is above k. The direction must be decided before you look at results, based on your research question. If you don’t have a strong directional prediction going in, use a two-tailed test.

Another mistake is treating the p-value as the probability that the null hypothesis is true. A p-value of 0.03 does not mean there’s a 3% chance H₀ is correct. It means there’s a 3% chance of seeing data this extreme if H₀ were correct. The American Statistical Association has emphasized that scientific conclusions should not rest on whether a p-value crosses a single threshold, and that p-values work best alongside other evidence like confidence intervals and effect sizes.

Finally, watch the language of your conclusion. If you reject H₀, you say there is sufficient evidence that the mean is greater than k. You don’t say you “proved” it. If you fail to reject H₀, you say there is insufficient evidence, not that the mean equals k.