How to Find P Value from T Statistic in Excel, R & More

To find a p-value from a t-statistic, you need two things: your t-value and your degrees of freedom. With those in hand, you can use an online calculator, a spreadsheet formula, a programming language, or a printed t-table to get your answer in seconds. The p-value represents the probability of seeing a t-statistic as extreme as yours (or more extreme) if the null hypothesis were true.

What You Need Before You Start

Every method for converting a t-statistic to a p-value requires the same two inputs. First, your t-statistic itself, which is the ratio of the difference you observed to the standard error of that difference. Second, your degrees of freedom, which depends on the type of t-test you ran:

  • One-sample or paired t-test: degrees of freedom = number of observations minus 1 (n − 1)
  • Independent two-sample t-test (equal variances): degrees of freedom = total observations minus 2 (n₁ + n₂ − 2)
  • Independent two-sample t-test (unequal variances): degrees of freedom are calculated using the Satterthwaite approximation, which adjusts for the difference in group variances. Your software will report this number for you.
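
The Satterthwaite (Welch) adjustment can be sketched in a few lines of Python if you want to see where the number comes from; this is an illustration of the formula, not a replacement for your software's output:

```python
def welch_df(var1, n1, var2, n2):
    """Welch-Satterthwaite degrees of freedom from sample variances and sizes."""
    a, b = var1 / n1, var2 / n2
    return (a + b) ** 2 / (a ** 2 / (n1 - 1) + b ** 2 / (n2 - 1))

# With equal variances and equal sizes, the result recovers n1 + n2 - 2
print(welch_df(4.0, 10, 4.0, 10))   # 18.0
```

When the variances or sample sizes differ, the result is usually a non-integer somewhere between the smaller group's n − 1 and n₁ + n₂ − 2.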

You also need to know whether you’re running a one-tailed or two-tailed test, since this changes the p-value you get.

One-Tailed vs. Two-Tailed Tests

A two-tailed test checks whether your result is extreme in either direction. It splits your significance threshold in half, putting 0.025 in each tail when using an alpha of 0.05. A one-tailed test puts the entire 0.05 in one tail because you’re only interested in one direction (for example, whether a treatment increases a score, not just changes it).

Because the t-distribution is symmetric around zero, converting between the two is simple. A one-tailed p-value is exactly half the two-tailed p-value. So if your two-tailed p-value is 0.04, the corresponding one-tailed p-value is 0.02. Most software reports the two-tailed value by default.
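
Because the halving rule follows from symmetry, you can confirm it directly with scipy; the t-value and degrees of freedom below are made-up illustrative numbers:

```python
from scipy.stats import t

t_stat, df = 2.1, 20              # illustrative values
p_one = t.sf(abs(t_stat), df)     # one-tailed p-value (right tail)
p_two = 2 * p_one                 # two-tailed p-value
print(p_one == p_two / 2)         # True: one-tailed is exactly half
```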

Using Excel or Google Sheets

Excel has three built-in functions depending on your test type:

  • Two-tailed test: =T.DIST.2T(ABS(t), df) where you enter the absolute value of your t-statistic and your degrees of freedom.
  • Right-tailed (upper) test: =T.DIST.RT(t, df) gives you the probability in the right tail.
  • Left-tailed (lower) test: =T.DIST(t, df, TRUE) gives you the cumulative probability up to your t-value. This is the area to the left of your t-statistic.

For example, if your t-statistic is 2.45 with 30 degrees of freedom, typing =T.DIST.2T(2.45, 30) returns approximately 0.0203. That’s your two-tailed p-value. Google Sheets uses the same function names and syntax.

Using Python

In Python, the scipy.stats module handles this cleanly. The survival function (sf) gives the area in the right tail:

from scipy.stats import t

t_statistic, df = 2.45, 30                 # example values
p_one_tail = t.sf(abs(t_statistic), df)    # right-tail area beyond |t|
p_two_tail = p_one_tail * 2

The sf function (survival function) calculates 1 minus the cumulative distribution function, which is the probability of getting a value at least as large as your t-statistic. Multiplying by 2 gives you the two-tailed p-value. Using abs() on the t-statistic ensures this works regardless of whether your t-value is positive or negative.

Using R

R’s built-in pt() function calculates the cumulative probability from the t-distribution:

t_statistic <- 2.45; df <- 30   # example values
p_two_tail <- 2 * pt(abs(t_statistic), df, lower.tail = FALSE)

Setting lower.tail = FALSE returns the probability to the right of your t-value. Multiplying by 2 converts it to a two-tailed p-value. For a one-tailed test, drop the multiplication. If your alternative hypothesis is that the mean is less than the null value and your t-statistic is negative, use pt(t_statistic, df, lower.tail = TRUE) instead.

Using a Graphing Calculator

On a TI-83 or TI-84, press [2nd] then [VARS] to open the DISTR menu. Select tcdf( (option 6 on the TI-84, option 5 on the TI-83), which expects three inputs: a lower bound, an upper bound, and degrees of freedom.

  • Left-tailed test: Enter tcdf(-99, t, df). Using -99 as the lower bound approximates negative infinity.
  • Right-tailed test: Enter tcdf(t, 99, df). Using 99 as the upper bound approximates positive infinity.
  • Two-tailed test: Run the right-tailed version using the absolute value of your t-statistic, then multiply the result by 2.

For instance, with a test statistic of −2.05 and 15 degrees of freedom, entering tcdf(-99, -2.05, 15) returns a p-value of about 0.029.
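
If you want to double-check the calculator against software, scipy's cdf plays the role of tcdf with a lower bound of negative infinity:

```python
from scipy.stats import t

# Matches tcdf(-99, -2.05, 15): left-tail area below t = -2.05 with df = 15
p_left = t.cdf(-2.05, 15)
print(round(p_left, 3))   # 0.029
```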

Using a Printed T-Table

A t-table won’t give you an exact p-value, but it narrows the range. Find your degrees of freedom in the leftmost column, then scan across that row. Your t-statistic will fall between two critical values. The p-value column headers above those values tell you the range your p-value falls in.

If your t-value is larger than the biggest critical value in your row, your p-value is smaller than the smallest probability listed, often less than 0.0005. If it falls between the columns for 0.025 and 0.01, you know your two-tailed p-value is somewhere in that range. This is less precise than software but works fine for quick checks, especially on exams.
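
The bracketing a t-table gives you can be mimicked with critical values; the column cutoffs below (0.10, 0.05, 0.02, 0.01) are typical two-tailed headers, though printed tables vary:

```python
from scipy.stats import t

def bracket_p(t_stat, df, levels=(0.10, 0.05, 0.02, 0.01)):
    """Return (low, high) bounds on the two-tailed p-value, table-style."""
    hi = 1.0
    for alpha in levels:
        crit = t.ppf(1 - alpha / 2, df)   # two-tailed critical value for alpha
        if abs(t_stat) < crit:
            return (alpha, hi)            # not significant at alpha: p > alpha
        hi = alpha
    return (0.0, levels[-1])              # beyond the last column: p < 0.01

print(bracket_p(2.45, 30))   # p falls between 0.02 and 0.05
```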

A Worked Example

Suppose you’re testing whether the average GPA at a school differs from 3.0. You collect data from 15 students and calculate a t-statistic of 1.94. Your degrees of freedom are 15 − 1 = 14.

In Excel, you’d type =T.DIST.RT(1.94, 14) and get approximately 0.036. That’s the one-tailed p-value. For a two-tailed test, you’d use =T.DIST.2T(1.94, 14) to get approximately 0.073. The one-tailed result of 0.036 falls below the common alpha of 0.05, suggesting evidence against the null hypothesis. The two-tailed result of 0.073 does not, which illustrates why your choice of one-tailed vs. two-tailed matters.
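
You can confirm the numbers from the worked example in Python:

```python
from scipy.stats import t

t_stat, df = 1.94, 14
p_one = t.sf(t_stat, df)     # matches =T.DIST.RT(1.94, 14)
p_two = 2 * p_one            # matches =T.DIST.2T(1.94, 14)
print(round(p_one, 3), round(p_two, 3))   # 0.036 0.073
```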

How to Interpret the Result

Once you have your p-value, you compare it to your predetermined significance level (alpha). The most common threshold is 0.05, meaning you’re accepting a 5% chance of rejecting the null hypothesis when it is actually true. Some fields use stricter thresholds: 0.01 in medical research or 0.001 in genetics, for instance.

A smaller standard error produces a larger t-statistic, which in turn produces a smaller p-value. This makes intuitive sense: when your data points cluster tightly around the mean, even a modest difference from the null value becomes harder to explain by chance alone. Larger sample sizes also shrink the standard error, which is why studies with more participants tend to produce more decisive p-values from the same effect size.
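
That relationship is easy to demonstrate: hold the observed difference and spread fixed and let the sample size grow. The effect size and standard deviation below are invented for illustration:

```python
from math import sqrt
from scipy.stats import t

effect, sd = 0.2, 1.0                   # fixed difference and spread (made up)
for n in (10, 40, 160):
    se = sd / sqrt(n)                   # standard error shrinks with n
    t_stat = effect / se                # so the t-statistic grows
    p = 2 * t.sf(abs(t_stat), n - 1)    # and the two-tailed p-value falls
    print(n, round(t_stat, 2), round(p, 4))
```

The same 0.2-unit difference goes from unremarkable at n = 10 to statistically significant at n = 160.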

The p-value does not tell you the size or importance of the effect. A tiny, meaningless difference can produce a very small p-value if the sample is large enough. Always pair your p-value with the actual magnitude of the difference you observed.