Understanding the Null Hypothesis for Linear Regression

Linear regression is a technique we can use to understand the relationship between one or more predictor variables and a response variable.

If we only have one predictor variable and one response variable, we can use simple linear regression, which uses the following formula to estimate the relationship between the variables:

ŷ = β0 + β1x

  • ŷ: The estimated response value.
  • β0: The average value of y when x is zero.
  • β1: The average change in y associated with a one-unit increase in x.
  • x: The value of the predictor variable.

Simple linear regression uses the following null and alternative hypotheses:

  • H0: β1 = 0
  • HA: β1 ≠ 0

The null hypothesis states that the coefficient β1 is equal to zero. In other words, there is no statistically significant relationship between the predictor variable, x, and the response variable, y.

The alternative hypothesis states that β1 is not equal to zero. In other words, there is a statistically significant relationship between x and y.

If we have multiple predictor variables and one response variable, we can use multiple linear regression, which uses the following formula to estimate the relationship between the variables:

ŷ = β0 + β1x1 + β2x2 + … + βkxk

  • β0: The average value of y when all predictor variables are equal to zero.
  • βi: The average change in y associated with a one-unit increase in xi.
  • xi: The value of the predictor variable xi.

Multiple linear regression uses the following null and alternative hypotheses:

  • H0: β1 = β2 = … = βk = 0
  • HA: at least one βi ≠ 0

The null hypothesis states that all coefficients in the model are equal to zero. In other words, none of the predictor variables have a statistically significant relationship with the response variable, y.

The alternative hypothesis states that the coefficients are not all simultaneously equal to zero; that is, at least one predictor variable has a nonzero coefficient.

The following examples show how to decide to reject or fail to reject the null hypothesis in both simple linear regression and multiple linear regression models.

Example 1: Simple Linear Regression

Suppose a professor would like to use the number of hours studied to predict the exam score that students will receive in his class. He collects data for 20 students and fits a simple linear regression model.

The following screenshot shows the output of the regression model:

[Output of simple linear regression in Excel]

The fitted simple linear regression model is:

Exam Score = 67.1617 + 5.2503*(hours studied)

To determine if there is a statistically significant relationship between hours studied and exam score, we need to analyze the overall F value of the model and the corresponding p-value:

  • Overall F-Value:  47.9952
  • P-value:  0.000

Since this p-value is less than .05, we can reject the null hypothesis. In other words, there is a statistically significant relationship between hours studied and exam score received.
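As an illustration, here is a minimal R sketch with simulated stand-in data (the professor's actual data are not shown, so the names and numbers below are assumptions) that fits the model and reads off the overall F-statistic and its p-value:

set.seed(1)
hours <- runif(20, min = 0, max = 10)           # hours studied (simulated)
score <- 67 + 5.25 * hours + rnorm(20, sd = 5)  # exam score with noise (simulated)
fit <- lm(score ~ hours)

fs <- summary(fit)$fstatistic                   # c(value, numdf, dendf)
p_value <- pf(fs[1], fs[2], fs[3], lower.tail = FALSE)
p_value < 0.05                                  # TRUE -> reject H0: beta1 = 0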

Example 2: Multiple Linear Regression

Suppose a professor would like to use the number of hours studied and the number of prep exams taken to predict the exam score that students will receive in his class. He collects data for 20 students and fits a multiple linear regression model.

The following screenshot shows the output of the regression model:

[Multiple linear regression output in Excel]

The fitted multiple linear regression model is:

Exam Score = 67.67 + 5.56*(hours studied) − 0.60*(prep exams taken)

To determine if there is a jointly statistically significant relationship between the two predictor variables and the response variable, we need to analyze the overall F value of the model and the corresponding p-value:

  • Overall F-Value:  23.46
  • P-value:  0.00

Since this p-value is less than .05, we can reject the null hypothesis. In other words, hours studied and prep exams taken have a jointly statistically significant relationship with exam score.

Note: Although the p-value for prep exams taken (p = 0.52) is not significant on its own, hours studied and prep exams taken together have a jointly statistically significant relationship with exam score.
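A matching R sketch for the joint test, again with simulated stand-in data (all names and numbers are assumptions): the overall F-statistic tests both slopes at once, even when one predictor is individually insignificant.

set.seed(1)
hours <- runif(20, 0, 10)                        # hours studied (simulated)
prep  <- rpois(20, lambda = 2)                   # prep exams taken (simulated)
score <- 67 + 5.5 * hours - 0.6 * prep + rnorm(20, sd = 5)
fit <- lm(score ~ hours + prep)

summary(fit)$coefficients                        # per-coefficient t-tests
fs <- summary(fit)$fstatistic
pf(fs[1], fs[2], fs[3], lower.tail = FALSE)      # joint p-value for H0: all slopes = 0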

Additional Resources

  • Understanding the F-Test of Overall Significance in Regression
  • How to Read and Interpret a Regression Table
  • How to Report Regression Results
  • How to Perform Simple Linear Regression in Excel
  • How to Perform Multiple Linear Regression in Excel


Linear regression - Hypothesis testing

by Marco Taboga , PhD

This lecture discusses how to perform tests of hypotheses about the coefficients of a linear regression model estimated by ordinary least squares (OLS).

Table of contents

  • Normal vs non-normal model
  • The linear regression model
  • Matrix notation
  • Tests of hypothesis in the normal linear regression model
  • Test of a restriction on a single coefficient (t test)
  • Test of a set of linear restrictions (F test)
  • Tests based on maximum likelihood procedures (Wald, Lagrange multiplier, likelihood ratio)
  • Tests of hypothesis when the OLS estimator is asymptotically normal
  • Test of a restriction on a single coefficient (z test)
  • Test of a set of linear restrictions (Chi-square test)
  • Learn more about regression analysis

The lecture is divided into two parts:

in the first part, we discuss hypothesis testing in the normal linear regression model , in which the OLS estimator of the coefficients has a normal distribution conditional on the matrix of regressors;

in the second part, we show how to carry out hypothesis tests in linear regression analyses where the hypothesis of normality holds only in large samples (i.e., the OLS estimator can be proved to be asymptotically normal).

We also explain how to choose which test to carry out after estimating a linear regression model.


We now explain how to derive tests about the coefficients of the normal linear regression model.

It can be proved (see the lecture about the normal linear regression model) that the assumption of conditional normality implies that the OLS estimator of the coefficients, conditional on the matrix of regressors X, has a normal distribution with mean equal to the true coefficient vector β and covariance matrix \(\sigma^{2}(X^{\top}X)^{-1}\), where \(\sigma^{2}\) is the variance of the error terms.

How the acceptance region is determined depends not only on the desired size of the test, but also on whether the test is:

two-tailed (both things, i.e., both smaller and larger values of the statistic, lead to rejection), or

one-tailed (only one of the two things, i.e., either smaller or larger, is possible).

For more details on how to determine the acceptance region, see the glossary entry on critical values .


The F test is one-tailed .

A critical value in the right tail of the F distribution is chosen so as to achieve the desired size of the test.

Then, the null hypothesis is rejected if the F statistic is larger than the critical value.
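As a concrete illustration, here is a short R sketch of this rejection rule; the numbers are hypothetical, chosen only to show the mechanics:

alpha <- 0.05
q <- 2; df2 <- 47              # hypothetical: q linear restrictions, n - k residual df
F_stat <- 4.8                  # hypothetical value of the F statistic
crit <- qf(1 - alpha, df1 = q, df2 = df2)  # right-tail critical value
F_stat > crit                  # TRUE -> reject the null hypothesis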

In this section we explain how to perform hypothesis tests about the coefficients of a linear regression model when the OLS estimator is asymptotically normal.

As we have shown in the lecture on the properties of the OLS estimator, in several cases (i.e., under different sets of assumptions) it can be proved that the OLS estimator is consistent and asymptotically normal.

These two properties are used to derive the asymptotic distribution of the test statistics used in hypothesis testing.

The test can be either one-tailed or two-tailed . The same comments made for the t-test apply here.


Like the F test, the Chi-square test is usually one-tailed.

The desired size of the test is achieved by appropriately choosing a critical value in the right tail of the Chi-square distribution.

The null is rejected if the Chi-square statistic is larger than the critical value.
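The mechanics mirror the F test; a short R sketch, again with hypothetical numbers:

W <- 7.3                                   # hypothetical Chi-square (e.g., Wald) statistic
q <- 2                                     # number of restrictions
W > qchisq(0.95, df = q)                   # reject at the 5% level if TRUE
pchisq(W, df = q, lower.tail = FALSE)      # equivalently, compare this p-value to 0.05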

Want to learn more about regression analysis? Here are some suggestions:

  • R squared of a linear regression;
  • Gauss-Markov theorem;
  • Generalized Least Squares;
  • Multicollinearity;
  • Dummy variables;
  • Selection of linear regression models;
  • Partitioned regression;
  • Ridge regression.

How to cite

Please cite as:

Taboga, Marco (2021). "Linear regression - Hypothesis testing", Lectures on probability theory and mathematical statistics. Kindle Direct Publishing. Online appendix. https://www.statlect.com/fundamentals-of-statistics/linear-regression-hypothesis-testing.


Simple linear regression

[Fig. 9: Simple linear regression]

Errors: \(\varepsilon_i \sim N(0,\sigma^2)\quad \text{i.i.d.}\)

Fit: the estimates \(\hat\beta_0\) and \(\hat\beta_1\) are chosen to minimize the (training) residual sum of squares (RSS):

\[RSS = \sum_{i=1}^n \left(y_i - \hat\beta_0 - \hat\beta_1 x_i\right)^2.\]

Sample code: advertising data
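A plausible sketch of that code in R, assuming the advertising data is available locally as Advertising.csv with columns TV, radio, newspaper, and sales (the file name and location are assumptions):

ads <- read.csv("Advertising.csv")   # assumed local copy of the advertising data
fit <- lm(sales ~ TV, data = ads)    # regress sales on TV advertising
summary(fit)                         # coefficient estimates, SEs, t- and p-values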

Estimates \(\hat\beta_0\) and \(\hat\beta_1\)

A little calculus shows that the minimizers of the RSS are:

\[\hat\beta_1 = \frac{\sum_{i=1}^n (x_i - \bar x)(y_i - \bar y)}{\sum_{i=1}^n (x_i - \bar x)^2}, \qquad \hat\beta_0 = \bar y - \hat\beta_1 \bar x.\]
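These closed-form formulas can be verified against lm() on any dataset, for example R's built-in cars data:

x <- cars$speed; y <- cars$dist
b1 <- sum((x - mean(x)) * (y - mean(y))) / sum((x - mean(x))^2)  # slope formula
b0 <- mean(y) - b1 * mean(x)                                     # intercept formula
c(b0, b1)
coef(lm(y ~ x))   # identical estimates from lm()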

Assessing the accuracy of \(\hat \beta_0\) and \(\hat\beta_1\)

[Fig. 10: How variable is the regression line?]

Based on our model

The standard errors for the parameters are:

\[SE(\hat\beta_1)^2 = \frac{\sigma^2}{\sum_{i=1}^n (x_i - \bar x)^2}, \qquad SE(\hat\beta_0)^2 = \sigma^2\left[\frac{1}{n} + \frac{\bar x^2}{\sum_{i=1}^n (x_i - \bar x)^2}\right].\]

95% confidence intervals (approximately estimate ± 2 standard errors):

\[\hat\beta_1 \pm 2\,SE(\hat\beta_1), \qquad \hat\beta_0 \pm 2\,SE(\hat\beta_0).\]

Hypothesis test

Null hypothesis \(H_0\) : There is no relationship between \(X\) and \(Y\) .

Alternative hypothesis \(H_a\) : There is some relationship between \(X\) and \(Y\) .

Based on our model: this translates to

\(H_0\) : \(\beta_1=0\) .

\(H_a\) : \(\beta_1\neq 0\) .

Test statistic:

\[t = \frac{\hat\beta_1 - 0}{SE(\hat\beta_1)}\]

Under the null hypothesis, this has a \(t\)-distribution with \(n-2\) degrees of freedom.
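As a sketch, the same statistic and its two-sided p-value computed by hand in R on the built-in cars data:

fit <- lm(dist ~ speed, data = cars)
cf <- coef(summary(fit))                     # Estimate, Std. Error, t value, Pr(>|t|)
t_stat <- cf["speed", "Estimate"] / cf["speed", "Std. Error"]
2 * pt(-abs(t_stat), df = nrow(cars) - 2)    # matches cf["speed", "Pr(>|t|)"]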

Sample output: advertising data

Interpreting the hypothesis test

If we reject the null hypothesis, can we assume there is an exact linear relationship?

No. A quadratic relationship may be a better fit, for example. This test assumes the simple linear regression model is correct, which precludes a quadratic relationship.

If we don’t reject the null hypothesis, can we assume there is no relationship between \(X\) and \(Y\) ?

No. This test is based on the model we posited above and is only powerful against certain monotone alternatives. There could be more complex non-linear relationships.

Linear regression hypothesis testing: Concepts, Examples


In relation to machine learning, linear regression is defined as a predictive modeling technique that allows us to build a model which can help predict continuous response variables as a function of a linear combination of explanatory or predictor variables. While training linear regression models, we need to rely on hypothesis testing to determine the relationship between the response and the predictor variables. In the case of the linear regression model, two types of hypothesis tests are done: T-tests and F-tests. In other words, two types of statistics are used to assess whether a linear regression model relating the response and predictor variables exists: t-statistics and f-statistics. As data scientists, it is of utmost importance to determine whether linear regression is the correct choice of model for a particular problem, and this can be done by performing hypothesis testing on the regression's response and predictor variables. These concepts are often unclear to many data scientists. In this blog post, we will discuss linear regression and the hypothesis testing related to t-statistics and f-statistics, and we will provide an example to help illustrate how these concepts work.


What are linear regression models?

A linear regression model can be defined as the function approximation that represents a continuous response variable as a function of one or more predictor variables. While building a linear regression model, the goal is to identify a linear equation that best predicts or models the relationship between the response or dependent variable and one or more predictor or independent variables.

There are two different kinds of linear regression models. They are as follows:

  • Simple or Univariate linear regression models: These are linear regression models used to build a linear relationship between one response or dependent variable and one predictor or independent variable. The form of the equation that represents a simple linear regression model is Y = mX + b, where m is the coefficient of the predictor variable and b is the bias. When considering the linear regression line, m represents the slope and b represents the intercept.
  • Multiple or Multivariate linear regression models: These are linear regression models used to build a linear relationship between one response or dependent variable and more than one predictor or independent variable. The form of the equation that represents a multiple linear regression model is Y = b0 + b1X1 + b2X2 + … + bnXn, where bi represents the coefficient of the ith predictor variable. In this type of linear regression model, each predictor variable has its own coefficient that is used to calculate the predicted value of the response variable.

While training linear regression models, the requirement is to determine the coefficients which result in the best-fitted linear regression line. The learning algorithm used to find the most appropriate coefficients is known as least squares regression. In the least-squares regression method, the coefficients are calculated using the least-squares error function. The main objective of this method is to minimize the sum of squared residuals between the actual and predicted response values. The sum of squared residuals is also called the residual sum of squares (RSS). The outcome of executing the least-squares regression method is a set of coefficients that minimize the linear regression cost function.

The residual e of the ith observation is represented as follows, where \(Y_i\) is the ith observation and \(\hat{Y}_i\) is the prediction for the ith observation (the value of the response variable predicted for observation i):

\(e_i = Y_i - \hat{Y}_i\)

The residual sum of squares can be represented as the following:

\(RSS = e_1^2 + e_2^2 + e_3^2 + \dots + e_n^2\)

The least-squares method represents the algorithm that minimizes the above term, RSS.

Once the coefficients are determined, can it be claimed that these coefficients are the most appropriate ones for linear regression? The answer is no. After all, the coefficients are only estimates, and thus there are standard errors associated with each of them. Recall that the standard error measures the error of estimating a population parameter from sample data and is used to construct the confidence interval in which the population parameter is likely to lie. The standard error of a mean is calculated as the standard deviation of the sample divided by the square root of the sample size, as the formula below shows.

\(SE(\mu) = \frac{\sigma}{\sqrt{N}}\)
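For example, in R, with a made-up sample (the numbers are arbitrary):

x <- c(4.1, 5.3, 6.0, 5.5, 4.8)   # hypothetical sample
sd(x) / sqrt(length(x))           # standard error of the mean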

Thus, without analyzing aspects such as the standard errors of the coefficients, it cannot be claimed that the linear regression coefficients are the most suitable ones. This is where hypothesis testing is needed. Before we get into why we need hypothesis testing with the linear regression model, let's briefly review what hypothesis testing is.

Train a Multiple Linear Regression Model using R

Before getting into the hypothesis-testing concepts for the linear regression model, let's train a multi-variate or multiple linear regression model and print the summary output of the model, which will be referred to in the next section.

The data used for creating the multiple linear regression model is BostonHousing, which can be loaded in RStudio by installing the mlbench package. The code is shown below:

install.packages("mlbench")  # provides the BostonHousing dataset
library(mlbench)
data("BostonHousing")

Once the data is loaded, the code shown below can be used to create the linear regression model.

attach(BostonHousing)
BostonHousing.lm <- lm(log(medv) ~ crim + chas + rad + lstat)
summary(BostonHousing.lm)

Executing the above command will result in the creation of a linear regression model with the response variable as medv and predictor variables as crim, chas, rad, and lstat. The following represents the details related to the response and predictor variables:

  • log(medv) : Log of the median value of owner-occupied homes in USD 1000’s
  • crim : Per capita crime rate by town
  • chas : Charles River dummy variable (= 1 if tract bounds river; 0 otherwise)
  • rad : Index of accessibility to radial highways
  • lstat : Percentage of the lower status of the population

The following is the output of the summary command, which prints details of the model, including the hypothesis-testing details for the coefficients (t-statistics) and for the model as a whole (f-statistics):

[Linear regression model summary table in R]

Hypothesis tests & Linear Regression Models

Hypothesis tests are the statistical procedures used to test a claim or assumption about the underlying distribution of a population based on sample data. Here are the key steps for doing hypothesis tests with linear regression models:

  • Hypothesis formulation for T-tests: In the case of linear regression, the claim is that there exists a relationship between the response and predictor variables, and the claim is represented by non-zero values of the coefficients of the predictor variables in the linear equation or regression model. This is formulated as the alternative hypothesis. Thus, the null hypothesis is that there is no relationship between the response and the predictor variables; hence, the coefficient related to each predictor variable equals zero (0). So, if the linear regression model is Y = a0 + a1x1 + a2x2 + a3x3, then the null hypothesis for each test states that a1 = 0, a2 = 0, a3 = 0, etc. For each predictor variable, an individual hypothesis test is done to determine whether the relationship between the response and that particular predictor variable is statistically significant based on the sample data used for training the model. Thus, if there are, say, 5 features, there will be five hypothesis tests, each with an associated null and alternative hypothesis.
  • Hypothesis formulation for the F-test: In addition, there is a hypothesis test done around the claim that there is a linear regression model representing the response variable and all the predictor variables. The null hypothesis is that the linear regression model does not exist. This essentially means that the value of all the coefficients is equal to zero. So, if the linear regression model is Y = a0 + a1x1 + a2x2 + a3x3, then the null hypothesis states that a1 = a2 = a3 = 0.
  • F-statistic for testing the hypothesis for the linear regression model: The F-test is used to test the null hypothesis that a linear regression model does not exist, representing the relationship between the response variable y and the predictor variables x1, x2, x3, x4 and x5; that is, that the coefficients of x1 through x5 are all zero. The F-statistic is calculated as a function of the sum of squared residuals for the restricted regression (the linear regression model with only the intercept or bias, all coefficient values being zero) and the sum of squared residuals for the unrestricted regression (the full linear regression model). In the summary output above, note the value of the f-statistic as 15.66 against degrees of freedom of 5 and 194.
  • Evaluate t-statistics against the critical value/region: After calculating the value of the t-statistic for each coefficient, it is time to decide whether to reject or fail to reject the null hypothesis. For this decision, one needs to set a significance level, also known as the alpha level; a significance level of 0.05 is usually set for rejecting the null hypothesis or otherwise. If the value of the t-statistic falls in the critical region, the null hypothesis is rejected. Equivalently, if the p-value comes out to be less than 0.05, the null hypothesis is rejected.
  • Evaluate the f-statistic against the critical value/region: The value of the F-statistic and its p-value are evaluated for testing the null hypothesis that the linear regression model representing the response and predictor variables does not exist. If the value of the f-statistic is more than the critical value at the 0.05 level of significance, the null hypothesis is rejected. This means that a linear model exists with at least one nonzero coefficient.
  • Draw conclusions: The final step of hypothesis testing is to draw a conclusion by interpreting the results in terms of the original claim or hypothesis. If the null hypothesis for a predictor variable is rejected, the relationship between the response and that predictor variable is statistically significant based on the evidence (the sample data used for training the model). Similarly, if the f-statistic lies in the critical region and its p-value is less than the alpha value (usually 0.05), one can say that a linear regression model exists. The sketch after this list shows how to read these quantities off the model summary in R.
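A minimal sketch of reading these quantities from the trained model (assuming BostonHousing.lm from the earlier code):

s <- summary(BostonHousing.lm)
s$coefficients[, c("t value", "Pr(>|t|)")]    # per-coefficient t-tests
s$fstatistic                                  # overall F value and its degrees of freedom
pf(s$fstatistic[1], s$fstatistic[2], s$fstatistic[3],
   lower.tail = FALSE)                        # p-value for the overall F-test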

Why hypothesis tests for linear regression models?

The reasons why we need hypothesis tests in the case of a linear regression model are the following:

  • By creating the model, we are establishing new claims about the relationship between the response or dependent variable and one or more predictor or independent variables. One or more tests are needed to justify those claims; these acts of testing the claims are, in other words, hypothesis tests.
  • One kind of test is required to test the relationship between the response and each of the predictor variables (hence, T-tests).
  • Another kind of test is required to test the linear regression model representation as a whole. This is called the F-test.

While training linear regression models, hypothesis testing is done to determine whether the relationship between the response and each of the predictor variables is statistically significant. The coefficient related to each predictor variable is estimated. Then, an individual hypothesis test is done to determine whether the relationship between the response and that particular predictor variable is statistically significant based on the sample data used for training the model. If the null hypothesis for a predictor is rejected, it means there is evidence of a relationship between the response and that particular predictor variable. T-statistics are used for performing the hypothesis testing because the standard deviation of the sampling distribution is unknown. The value of the t-statistic is compared with the critical value from the t-distribution table in order to decide whether to reject the null hypothesis regarding the relationship between the response and predictor variables. If the value falls in the critical region, the null hypothesis is rejected, which means there is a statistically significant relationship between the response and that predictor variable. In addition to T-tests, an F-test is performed to test the null hypothesis that the linear regression model does not exist, i.e., that the value of all the coefficients is zero (0). Learn more about linear regression and the t-test in this blog – Linear regression t-test: formula, example.



Statistics By Jim

Making statistics intuitive

Null Hypothesis: Definition, Rejecting & Examples

By Jim Frost

What is a Null Hypothesis?

The null hypothesis in statistics states that there is no difference between groups or no relationship between variables. It is one of two mutually exclusive hypotheses about a population in a hypothesis test.


  • Null Hypothesis H0: No effect exists in the population.
  • Alternative Hypothesis HA: The effect exists in the population.

In every study or experiment, researchers assess an effect or relationship. This effect can be the effectiveness of a new drug, building material, or other intervention that has benefits. There is a benefit or connection that the researchers hope to identify. Unfortunately, no effect may exist. In statistics, we call this lack of an effect the null hypothesis. Researchers assume that this notion of no effect is correct until they have enough evidence to suggest otherwise, similar to how a trial presumes innocence.

In this context, the analysts don’t necessarily believe the null hypothesis is correct. In fact, they typically want to reject it because that leads to more exciting finds about an effect or relationship. The new vaccine works!

You can think of it as the default theory that requires sufficiently strong evidence to reject. Like a prosecutor, researchers must collect sufficient evidence to overturn the presumption of no effect. Investigators must work hard to set up a study and a data collection system to obtain evidence that can reject the null hypothesis.

Related post: What is an Effect in Statistics?

Null Hypothesis Examples

Null hypotheses start as research questions that the investigator rephrases as a statement indicating there is no effect or relationship.

  • Research question: Does the vaccine prevent infections? Null hypothesis: The vaccine does not affect the infection rate.
  • Research question: Does the new additive increase product strength? Null hypothesis: The additive does not affect mean product strength.
  • Research question: Does the exercise intervention increase bone mineral density? Null hypothesis: The intervention does not affect bone mineral density.
  • Research question: As screen time increases, does test performance decrease? Null hypothesis: There is no relationship between screen time and test performance.

After reading these examples, you might think they’re a bit boring and pointless. However, the key is to remember that the null hypothesis defines the condition that the researchers need to discredit before suggesting an effect exists.

Let’s see how you reject the null hypothesis and get to those more exciting findings!

When to Reject the Null Hypothesis

So, you want to reject the null hypothesis, but how and when can you do that? To start, you’ll need to perform a statistical test on your data. The following is an overview of performing a study that uses a hypothesis test.

The first step is to devise a research question and the appropriate null hypothesis. After that, the investigators need to formulate an experimental design and data collection procedures that will allow them to gather data that can answer the research question. Then they collect the data. For more information about designing a scientific study that uses statistics, read my post 5 Steps for Conducting Studies with Statistics .

After data collection is complete, statistics and hypothesis testing enter the picture. Hypothesis testing takes your sample data and evaluates how consistent they are with the null hypothesis. The p-value is a crucial part of the statistical results because it quantifies how strongly the sample data contradict the null hypothesis.

When the sample data provide sufficient evidence, you can reject the null hypothesis. In a hypothesis test, this process involves comparing the p-value to your significance level .

Rejecting the Null Hypothesis

Reject the null hypothesis when the p-value is less than or equal to your significance level. Your sample data favor the alternative hypothesis, which suggests that the effect exists in the population. For a mnemonic device, remember—when the p-value is low, the null must go!

When you can reject the null hypothesis, your results are statistically significant. Learn more about Statistical Significance: Definition & Meaning .

Failing to Reject the Null Hypothesis

Conversely, when the p-value is greater than your significance level, you fail to reject the null hypothesis. The sample data provide insufficient evidence to conclude that the effect exists in the population. When the p-value is high, the null must fly!

Note that failing to reject the null is not the same as proving it. For more information about the difference, read my post about Failing to Reject the Null .

That’s a very general look at the process. But I hope you can see how the path to more exciting findings depends on being able to rule out the less exciting null hypothesis that states there’s nothing to see here!

Let’s move on to learning how to write the null hypothesis for different types of effects, relationships, and tests.

Related posts: How Hypothesis Tests Work and Interpreting P-values

How to Write a Null Hypothesis

The null hypothesis varies by the type of statistic and hypothesis test. Remember that inferential statistics use samples to draw conclusions about populations. Consequently, when you write a null hypothesis, it must make a claim about the relevant population parameter. Further, that claim usually indicates that the effect does not exist in the population. Below are typical examples of writing a null hypothesis for various parameters and hypothesis tests.

Related posts: Descriptive vs. Inferential Statistics and Populations, Parameters, and Samples in Inferential Statistics

Group Means

T-tests and ANOVA assess the differences between group means. For these tests, the null hypothesis states that there is no difference between group means in the population. In other words, the experimental conditions that define the groups do not affect the mean outcome. Mu (µ) is the population parameter for the mean, and you’ll need to include it in the statement for this type of study.

For example, an experiment compares the mean bone density changes for a new osteoporosis medication. The control group does not receive the medicine, while the treatment group does. The null states that the mean bone density changes for the control and treatment groups are equal.

  • Null Hypothesis H0: Group means are equal in the population: µ1 = µ2, or µ1 − µ2 = 0.
  • Alternative Hypothesis HA: Group means are not equal in the population: µ1 ≠ µ2, or µ1 − µ2 ≠ 0.

Group Proportions

Proportions tests assess the differences between group proportions. For these tests, the null hypothesis states that there is no difference between group proportions. Again, the experimental conditions did not affect the proportion of events in the groups. P is the population proportion parameter that you’ll need to include.

For example, a vaccine experiment compares the infection rate in the treatment group to the control group. The treatment group receives the vaccine, while the control group does not. The null states that the infection rates for the control and treatment groups are equal.

  • Null Hypothesis H0: Group proportions are equal in the population: p1 = p2.
  • Alternative Hypothesis HA: Group proportions are not equal in the population: p1 ≠ p2.

Correlation and Regression Coefficients

Some studies assess the relationship between two continuous variables rather than differences between groups.

In these studies, analysts often use either correlation or regression analysis . For these tests, the null states that there is no relationship between the variables. Specifically, it says that the correlation or regression coefficient is zero. As one variable increases, there is no tendency for the other variable to increase or decrease. Rho (ρ) is the population correlation parameter and beta (β) is the regression coefficient parameter.

For example, a study assesses the relationship between screen time and test performance. The null states that there is no correlation between this pair of variables. As screen time increases, test performance does not tend to increase or decrease.

  • Null Hypothesis H0: The correlation in the population is zero: ρ = 0.
  • Alternative Hypothesis HA: The correlation in the population is not zero: ρ ≠ 0.
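In R, cor.test() performs exactly this test; a small sketch with simulated stand-in data (the variable names and effect sizes are assumptions for illustration):

set.seed(2)
screen <- runif(30, 1, 6)                            # hours of screen time (simulated)
performance <- 80 - 3 * screen + rnorm(30, sd = 5)   # test performance (simulated)
cor.test(screen, performance)   # tests H0: rho = 0 against HA: rho != 0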

For all these cases, the analysts define the hypotheses before the study. After collecting the data, they perform a hypothesis test to determine whether they can reject the null hypothesis.

The preceding examples are all for two-tailed hypothesis tests. To learn about one-tailed tests and how to write a null hypothesis for them, read my post One-Tailed vs. Two-Tailed Tests .

Related post: Understanding Correlation



Reader Interactions

Reader (January 10, 2024 at 1:23 pm):

Hi Jim, In your comment you state that equivalence test null and alternate hypotheses are reversed. For hypothesis tests of data fits to a probability distribution, the null hypothesis is that the probability distribution fits the data. Is this correct?

Jim Frost (January 10, 2024 at 2:15 pm):

Those are two separate things, equivalence testing and normality tests. But, yes, you're correct for both.

Hypotheses are switched for equivalence testing. You need to “work” (i.e., collect a large sample of good-quality data) to reject the null that the groups are different and thereby conclude they're the same.

With typical hypothesis tests, if you have low quality data and a low sample size, you’ll fail to reject the null that they’re the same, concluding they’re equivalent. But that’s more a statement about the low quality and small sample size than anything to do with the groups being equal.

So, equivalence testing makes you work to obtain a finding that the groups are the same (at least within some amount you define as a trivial difference).

For normality testing, and other distribution tests, the null states that the data follow the distribution (normal or whatever). If you reject the null, you have sufficient evidence to conclude that your sample data don’t follow the probability distribution. That’s a rare case where you hope to fail to reject the null. And it suffers from the problem I describe above where you might fail to reject the null simply because you have a small sample size. In that case, you’d conclude the data follow the probability distribution but it’s more that you don’t have enough data for the test to register the deviation. In this scenario, if you had a larger sample size, you’d reject the null and conclude it doesn’t follow that distribution.

I don’t know of any equivalence testing type approach for distribution fit tests where you’d need to work to show the data follow a distribution, although I haven’t looked for one either!

Reader (February 20, 2022 at 9:26 pm):

Is a null hypothesis regularly (always) stated in the negative? “there is no” or “does not”

Jim Frost (February 23, 2022 at 9:21 pm):

Typically, the null hypothesis includes an equal sign. The null hypothesis states that the population parameter equals a particular value. That value is usually one that represents no effect. In the case of a one-sided hypothesis test, the null still contains an equal sign but it’s “greater than or equal to” or “less than or equal to.” If you wanted to translate the null hypothesis from its native mathematical expression, you could use the expression “there is no effect.” But the mathematical form more specifically states what it’s testing.

It’s the alternative hypothesis that typically contains does not equal.

There are some exceptions. For example, in an equivalence test where the researchers want to show that two things are equal, the null hypothesis states that they’re not equal.

In short, the null hypothesis states the condition that the researchers hope to reject. They need to work hard to set up an experiment and data collection that’ll gather enough evidence to be able to reject the null condition.



Null & Alternative Hypotheses | Definitions, Templates & Examples

Published on May 6, 2022 by Shaun Turney. Revised on June 22, 2023.

The null and alternative hypotheses are two competing claims that researchers weigh evidence for and against using a statistical test :

  • Null hypothesis (H0): There's no effect in the population.
  • Alternative hypothesis (Ha or H1): There's an effect in the population.

Table of contents

  • Answering your research question with hypotheses
  • What is a null hypothesis?
  • What is an alternative hypothesis?
  • Similarities and differences between null and alternative hypotheses
  • How to write null and alternative hypotheses
  • Frequently asked questions

The null and alternative hypotheses offer competing answers to your research question . When the research question asks “Does the independent variable affect the dependent variable?”:

  • The null hypothesis (H0) answers “No, there's no effect in the population.”
  • The alternative hypothesis (Ha) answers “Yes, there is an effect in the population.”

The null and alternative are always claims about the population. That’s because the goal of hypothesis testing is to make inferences about a population based on a sample . Often, we infer whether there’s an effect in the population by looking at differences between groups or relationships between variables in the sample. It’s critical for your research to write strong hypotheses .

You can use a statistical test to decide whether the evidence favors the null or alternative hypothesis. Each type of statistical test comes with a specific way of phrasing the null and alternative hypothesis. However, the hypotheses can also be phrased in a general way that applies to any test.


The null hypothesis is the claim that there’s no effect in the population.

If the sample provides enough evidence against the claim that there’s no effect in the population ( p ≤ α), then we can reject the null hypothesis . Otherwise, we fail to reject the null hypothesis.

Although “fail to reject” may sound awkward, it’s the only wording that statisticians accept . Be careful not to say you “prove” or “accept” the null hypothesis.

Null hypotheses often include phrases such as “no effect,” “no difference,” or “no relationship.” When written in mathematical terms, they always include an equality (usually =, but sometimes ≥ or ≤).

You can never know with complete certainty whether there is an effect in the population. Some percentage of the time, your inference about the population will be incorrect. When you incorrectly reject the null hypothesis, it’s called a type I error . When you incorrectly fail to reject it, it’s a type II error.

Examples of null hypotheses

The table below gives examples of research questions and null hypotheses. There’s always more than one way to answer a research question, but these null hypotheses can help you get started.

  • Research question: Does tooth flossing affect the number of cavities? Null hypothesis (H0): Tooth flossing has no effect on the number of cavities. Test-specific phrasing (t test): The mean number of cavities per person does not differ between the flossing group (µ1) and the non-flossing group (µ2) in the population; µ1 = µ2.
  • Research question: Does the amount of text highlighted in the textbook affect exam scores? Null hypothesis (H0): The amount of text highlighted in the textbook has no effect on exam scores. Test-specific phrasing (linear regression): There is no relationship between the amount of text highlighted and exam scores in the population; β1 = 0.
  • Research question: Does daily meditation decrease the incidence of depression? Null hypothesis (H0): Daily meditation does not decrease the incidence of depression.* Test-specific phrasing (two-proportions test): The proportion of people with depression in the daily-meditation group (p1) is greater than or equal to the no-meditation group (p2) in the population; p1 ≥ p2.

*Note that some researchers prefer to always write the null hypothesis in terms of “no effect” and “=”. It would be fine to say that daily meditation has no effect on the incidence of depression and p1 = p2.

The alternative hypothesis (Ha) is the other answer to your research question. It claims that there's an effect in the population.

Often, your alternative hypothesis is the same as your research hypothesis. In other words, it’s the claim that you expect or hope will be true.

The alternative hypothesis is the complement to the null hypothesis. Null and alternative hypotheses are exhaustive, meaning that together they cover every possible outcome. They are also mutually exclusive, meaning that only one can be true at a time.

Alternative hypotheses often include phrases such as “an effect,” “a difference,” or “a relationship.” When alternative hypotheses are written in mathematical terms, they always include an inequality (usually ≠, but sometimes < or >). As with null hypotheses, there are many acceptable ways to phrase an alternative hypothesis.

Examples of alternative hypotheses

The table below gives examples of research questions and alternative hypotheses to help you get started with formulating your own.

  • Research question: Does tooth flossing affect the number of cavities? Alternative hypothesis (Ha): Tooth flossing has an effect on the number of cavities. Test-specific phrasing (t test): The mean number of cavities per person differs between the flossing group (µ1) and the non-flossing group (µ2) in the population; µ1 ≠ µ2.
  • Research question: Does the amount of text highlighted in a textbook affect exam scores? Alternative hypothesis (Ha): The amount of text highlighted in the textbook has an effect on exam scores. Test-specific phrasing (linear regression): There is a relationship between the amount of text highlighted and exam scores in the population; β1 ≠ 0.
  • Research question: Does daily meditation decrease the incidence of depression? Alternative hypothesis (Ha): Daily meditation decreases the incidence of depression. Test-specific phrasing (two-proportions test): The proportion of people with depression in the daily-meditation group (p1) is less than the no-meditation group (p2) in the population; p1 < p2.

Null and alternative hypotheses are similar in some ways:

  • They’re both answers to the research question.
  • They both make claims about the population.
  • They’re both evaluated by statistical tests.

However, there are important differences between the two types of hypotheses:

  • Definition: The null hypothesis claims there is no effect in the population; the alternative hypothesis claims there is an effect in the population.
  • Symbols: The null hypothesis uses an equality symbol (=, ≥, or ≤); the alternative hypothesis uses an inequality symbol (≠, <, or >).
  • When the p-value is low: The null hypothesis is rejected, and the alternative hypothesis is supported.
  • When the p-value is high: You fail to reject the null hypothesis, and the alternative hypothesis is not supported.


To help you write your hypotheses, you can use the template sentences below. If you know which statistical test you’re going to use, you can use the test-specific template sentences. Otherwise, you can use the general template sentences.

General template sentences

The only things you need to know to use these general template sentences are your dependent and independent variables. To write your research question, null hypothesis, and alternative hypothesis, fill in the following sentences with your variables:

Does independent variable affect dependent variable?

  • Null hypothesis ( H 0 ): Independent variable does not affect dependent variable.
  • Alternative hypothesis ( H a ): Independent variable affects dependent variable.

Test-specific template sentences

Once you know the statistical test you’ll be using, you can write your hypotheses in a more precise and mathematical way specific to the test you chose. The table below provides template sentences for common statistical tests.

  • t test with two groups. Null hypothesis (H0): The mean dependent variable does not differ between group 1 (µ1) and group 2 (µ2) in the population; µ1 = µ2. Alternative hypothesis (Ha): The mean dependent variable differs between group 1 (µ1) and group 2 (µ2) in the population; µ1 ≠ µ2.
  • ANOVA with three groups. Null hypothesis (H0): The mean dependent variable does not differ between group 1 (µ1), group 2 (µ2), and group 3 (µ3) in the population; µ1 = µ2 = µ3. Alternative hypothesis (Ha): The mean dependent variables of group 1 (µ1), group 2 (µ2), and group 3 (µ3) are not all equal in the population.
  • Correlation. Null hypothesis (H0): There is no correlation between the independent variable and the dependent variable in the population; ρ = 0. Alternative hypothesis (Ha): There is a correlation between the independent variable and the dependent variable in the population; ρ ≠ 0.
  • Simple linear regression. Null hypothesis (H0): There is no relationship between the independent variable and the dependent variable in the population; β1 = 0. Alternative hypothesis (Ha): There is a relationship between the independent variable and the dependent variable in the population; β1 ≠ 0.
  • Two-proportions test. Null hypothesis (H0): The dependent variable expressed as a proportion does not differ between group 1 (p1) and group 2 (p2) in the population; p1 = p2. Alternative hypothesis (Ha): The dependent variable expressed as a proportion differs between group 1 (p1) and group 2 (p2) in the population; p1 ≠ p2.
Note: The template sentences above assume that you're performing two-tailed tests, since each alternative hypothesis contains an inequality (≠). Two-tailed tests are appropriate for most studies.

Frequently asked questions

Hypothesis testing is a formal procedure for investigating our ideas about the world using statistics. It is used by scientists to test specific predictions, called hypotheses , by calculating how likely it is that a pattern or relationship between variables could have arisen by chance.

Null and alternative hypotheses are used in statistical hypothesis testing . The null hypothesis of a test always predicts no effect or no relationship between variables, while the alternative hypothesis states your research prediction of an effect or relationship.

The null hypothesis is often abbreviated as H0. When the null hypothesis is written using mathematical symbols, it always includes an equality symbol (usually =, but sometimes ≥ or ≤).

The alternative hypothesis is often abbreviated as Ha or H1. When the alternative hypothesis is written using mathematical symbols, it always includes an inequality symbol (usually ≠, but sometimes < or >).

A research hypothesis is your proposed answer to your research question. The research hypothesis usually includes an explanation (“ x affects y because …”).

A statistical hypothesis, on the other hand, is a mathematical statement about a population parameter. Statistical hypotheses always come in pairs: the null and alternative hypotheses . In a well-designed study , the statistical hypotheses correspond logically to the research hypothesis.

Cite this Scribbr article

If you want to cite this source, you can copy and paste the citation or click the “Cite this Scribbr article” button to automatically add the citation to our free Citation Generator.

Turney, S. (2023, June 22). Null & Alternative Hypotheses | Definitions, Templates & Examples. Scribbr. Retrieved September 9, 2024, from https://www.scribbr.com/statistics/null-and-alternative-hypotheses/


Linear Regression


r-statistics.co by Selva Prabhakaran


Linear regression is used to predict the value of an outcome variable Y based on one or more input predictor variables X. The aim is to establish a linear relationship (a mathematical formula) between the predictor variable(s) and the response variable, so that we can use this formula to estimate the value of the response Y when only the predictors (Xs) values are known.

Introduction

The aim of linear regression is to model a continuous variable Y as a mathematical function of one or more X variable(s), so that we can use this regression model to predict the Y when only the X is known. This mathematical equation can be generalized as follows:

Y = β1 + β2X + ϵ

where β1 is the intercept and β2 is the slope. Collectively, they are called regression coefficients. ϵ is the error term, the part of Y the regression model is unable to explain.

Example Problem

For this analysis, we will use the cars dataset that comes with R by default. cars is a standard built-in dataset that makes it convenient to demonstrate linear regression in a simple and easy-to-understand fashion. You can access this dataset simply by typing cars in your R console. You will find that it consists of 50 observations (rows) and 2 variables (columns), dist and speed. Let's print out the first six observations here.
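In R, that is:

head(cars)  # first six observations of the built-in cars dataset
#   speed dist
# 1     4    2
# 2     4   10
# 3     7    4
# 4     7   22
# 5     8   16
# 6     9   10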

Before we begin building the regression model, it is a good practice to analyze and understand the variables. The graphical analysis and correlation study below will help with this.

Graphical Analysis

The aim of this exercise is to build a simple regression model that we can use to predict Distance (dist) by establishing a statistically significant linear relationship with Speed (speed). But before jumping into the syntax, let's try to understand these variables graphically. Typically, for each of the independent variables (predictors), the following plots are drawn to visualize the following behavior:

  • Scatter plot : Visualize the linear relationship between the predictor and response
  • Box plot : To spot any outlier observations in the variable. Having outliers in your predictor can drastically affect the predictions as they can easily affect the direction/slope of the line of best fit.
  • Density plot : To see the distribution of the predictor variable. Ideally, a close to normal distribution (a bell shaped curve), without being skewed to the left or right is preferred. Let us see how to make each one of them.

Scatter Plot

Scatter plots can help visualize any linear relationships between the dependent (response) variable and independent (predictor) variables. Ideally, if you have multiple predictor variables, a scatter plot is drawn for each one of them against the response, along with the line of best fit, as seen below.

The scatter plot along with the smoothing line above suggests a linearly increasing relationship between the ‘dist’ and ‘speed’ variables. This is a good thing, because, one of the underlying assumptions in linear regression is that the relationship between the response and predictor variables is linear and additive.
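A minimal sketch of such a plot with base R's scatter.smooth():

scatter.smooth(x = cars$speed, y = cars$dist,
               main = "dist ~ speed")   # scatter plot with a loess smoothing line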

BoxPlot – Check for outliers

Generally, any datapoint that lies outside 1.5 times the interquartile range (1.5 × IQR) is considered an outlier, where the IQR is calculated as the distance between the 25th percentile and 75th percentile values for that variable.
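A sketch of the box-plot check, using boxplot.stats() to list any points beyond 1.5 × IQR:

par(mfrow = c(1, 2))  # two plots side by side
boxplot(cars$speed, main = "Speed",
        sub = paste("Outliers:", paste(boxplot.stats(cars$speed)$out, collapse = ", ")))
boxplot(cars$dist, main = "Distance",
        sub = paste("Outliers:", paste(boxplot.stats(cars$dist)$out, collapse = ", ")))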

Density plot – Check if the response variable is close to normality

Correlation

Correlation is a statistical measure that suggests the level of linear dependence between two variables that occur in a pair – just like what we have here with speed and dist. Correlation can take values between -1 and +1. If for every instance where speed increases the distance also increases along with it, then there is a high positive correlation between them, and therefore the correlation between them will be closer to 1. The opposite is true for an inverse relationship, in which case the correlation between the variables will be close to -1.

A value closer to 0 suggests a weak relationship between the variables. A low correlation (-0.2 < x < 0.2) probably suggests that much of the variation of the response variable (Y) is unexplained by the predictor (X), in which case we should probably look for better explanatory variables.
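For the cars data, the correlation can be computed directly:

cor(cars$speed, cars$dist)  # about 0.81, a strong positive linear association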

Build Linear Model

Now that we have seen the linear relationship pictorially in the scatter plot and by computing the correlation, let's see the syntax for building the linear model. The function used for building linear models is lm(). The lm() function takes in two main arguments, namely: 1. Formula 2. Data. The data is typically a data.frame, and the formula is an object of class formula. But the most common convention is to write out the formula directly in place of the argument, as written below.
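A minimal version of that call on the cars data, with the printed coefficients shown as comments:

linearMod <- lm(dist ~ speed, data = cars)  # build linear regression model
print(linearMod)
# Coefficients:
# (Intercept)        speed
#     -17.579        3.932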

Now that we have built the linear model, we also have established the relationship between the predictor and response in the form of a mathematical formula for Distance (dist) as a function of speed. In the output above, you can notice the ‘Coefficients’ part having two components: Intercept: -17.579, speed: 3.932. These are also called the beta coefficients. In other words, dist = Intercept + (β ∗ speed), i.e., dist = −17.579 + 3.932 ∗ speed.

Linear Regression Diagnostics

Now the linear model is built and we have a formula that we can use to predict the dist value if a corresponding speed is known. Is this enough to actually use this model? NO! Before using a regression model, you have to ensure that it is statistically significant. How do you ensure this? Let's begin by printing the summary statistics for linearMod.
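
A one-line call, assuming the linearMod object built above:

```r
summary(linearMod)  # coefficients with std. errors, t-values, p-values, R-squared, F-statistic
```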

The p Value: Checking for statistical significance

The summary statistics above tell us a number of things. One of them is the model p-value (in the bottom last line) and the p-values of the individual predictor variables (in the extreme right column under 'Coefficients'). The p-values are very important: we can consider a linear model to be statistically significant only when both of these p-values are less than the pre-determined statistical significance level, conventionally 0.05. This is visually interpreted by the significance stars at the end of each row. The more stars beside a variable's p-value, the more significant the variable.

Null and alternate hypothesis

Whenever there is a p-value, there is a null and an alternative hypothesis associated with it. In linear regression, the null hypothesis is that the coefficient associated with a variable is equal to zero. The alternative hypothesis is that the coefficient is not equal to zero (i.e. there exists a relationship between the independent variable in question and the dependent variable).

We can interpret the t-value like this: a large t-value indicates that it is unlikely the estimated coefficient would be this far from zero purely by chance if the true coefficient were zero. So, the higher the t-value, the better.

Pr(>|t|), or the p-value, is the probability of obtaining a t-value as high as or higher than the observed value when the null hypothesis (that the β coefficient is equal to zero, i.e. that there is no relationship) is true. So if Pr(>|t|) is low, the coefficient is significant (significantly different from zero); if Pr(>|t|) is high, the coefficient is not significant.

What does this mean for us? When the p-value is less than the significance level (< 0.05), we can safely reject the null hypothesis that the coefficient β of the predictor is zero. In our case, linearMod, both p-values are well below the 0.05 threshold, so we can conclude our model is indeed statistically significant.

It is absolutely important for the model to be statistically significant before we can go ahead and use it to predict (or estimate) the dependent variable; otherwise, the confidence in the predicted values from that model is reduced and they may be construed as an event of chance.

How to calculate the t Statistic and p-Values?

When the model coefficients and standard errors are known, the formula for calculating the t-statistic is: $$t\text{-statistic} = \frac{\beta\text{-coefficient}}{Std. Error}$$ The p-value is then the two-sided tail probability of this t-statistic under a t-distribution with n - q degrees of freedom.
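
As a sketch, these quantities can be recomputed by hand from the fitted model's coefficient table (assuming the linearMod object from earlier; for simple regression q = 2, so the degrees of freedom are n - 2):

```r
modelSummary  <- summary(linearMod)
modelCoeffs   <- modelSummary$coefficients             # coefficient table
beta.estimate <- modelCoeffs["speed", "Estimate"]      # slope estimate
std.error     <- modelCoeffs["speed", "Std. Error"]    # std. error of the slope
t_value <- beta.estimate / std.error                   # t-statistic
p_value <- 2 * pt(-abs(t_value), df = nrow(cars) - 2)  # two-sided p-value
```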

R-Squared and Adj R-Squared

Remember that the actual information in a dataset is the total variation it contains. What R-Squared tells us is the proportion of variation in the dependent (response) variable that has been explained by this model.

$$ R^{2} = 1 - \frac{SSE}{SST}$$

where, SSE is the sum of squared errors given by $SSE = \sum_{i}^{n} \left( y_{i} - \hat{y}_{i} \right) ^{2}$ and $SST = \sum_{i}^{n} \left( y_{i} - \bar{y} \right) ^{2}$ is the total sum of squares. Here, $\hat{y}_{i}$ is the fitted value for observation i and $\bar{y}$ is the mean of Y.

We don't necessarily discard a model based on a low R-Squared value. It's better practice to look at the AIC and the prediction accuracy on a validation sample when deciding on the efficacy of a model.

Now that's about R-Squared. What about adjusted R-Squared? As you add more X variables to your model, the R-Squared value of the new, bigger model will always be greater than that of the smaller subset. This is because all the variables in the original model are also present in the bigger model, so their contribution to explaining the dependent variable is preserved, and whatever new variable we add can only add (even if not significantly) to the variation that was already explained. It is here that the adjusted R-Squared value comes to help. Adj R-Squared penalizes the R-Squared value for the number of terms (read: predictors) in your model. Therefore, when comparing nested models, it is good practice to look at the adj-R-squared value over R-squared.

$$ R^{2}_{adj} = 1 - \frac{MSE}{MST}$$

where, M S E is the mean squared error given by $MSE = \frac{SSE}{\left( n-q \right)}$ and $MST = \frac{SST}{\left( n-1 \right)}$ is the mean squared total , where n is the number of observations and q is the number of coefficients in the model.

Therefore, by moving around the numerators and denominators, the relationship between $R^{2}$ and $R^{2}_{adj}$ becomes:

$$R^{2}_{adj} = 1 - \left( \frac{\left( 1 - R^{2}\right) \left(n-1\right)}{n-q}\right)$$

Standard Error and F-Statistic

Both the standard error and the F-statistic are measures of goodness of fit.

$$Std. Error = \sqrt{MSE} = \sqrt{\frac{SSE}{n-q}}$$

$$F-statistic = \frac{MSR}{MSE}$$

where, n is the number of observations, q is the number of coefficients and MSR is the mean square regression , calculated as,

$$MSR=\frac{\sum_{i}^{n}\left( \hat{y}_{i} - \bar{y}\right)^{2}}{q-1} = \frac{SST - SSE}{q - 1}$$

AIC and BIC

Akaike's information criterion, AIC (Akaike, 1974), and the Bayesian information criterion, BIC (Schwarz, 1978), are measures of the goodness of fit of an estimated statistical model and can also be used for model selection. Both criteria depend on the maximized value of the likelihood function L of the estimated model.

The AIC is defined as:

$$AIC = -2 \ln(L) + 2k$$

where, k is the number of model parameters and the BIC is defined as:

$$BIC = -2 \ln(L) + k \ln(n)$$

where, n is the sample size.

For model comparison, the model with the lowest AIC and BIC score is preferred.
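
In R, both criteria are available directly for a fitted lm object (assuming the linearMod object from earlier):

```r
AIC(linearMod)  # lower is better when comparing candidate models
BIC(linearMod)
```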

How to know if the model is the best fit for your data?

The most common metrics to look at while selecting the model are:

STATISTIC          CRITERION
R-Squared          Higher the better
Adj R-Squared      Higher the better
F-Statistic        Higher the better
Std. Error         Closer to zero the better
t-statistic        Should be greater than 1.96 for the p-value to be less than 0.05
AIC                Lower the better
BIC                Lower the better
Mallows Cp         Should be close to the number of predictors in the model
MAPE               Mean absolute percentage error; lower the better
MSE                Mean squared error; lower the better
Min-Max Accuracy   mean(min(actual, predicted) / max(actual, predicted)); higher the better

Predicting Linear Models

So far we have seen how to build a linear regression model using the whole dataset. If we build it that way, there is no way to tell how the model will perform with new data. So the preferred practice is to split your dataset into an 80:20 sample (training:test), build the model on the 80% sample, and then use the model thus built to predict the dependent variable on the test data.

Doing it this way, we will have the model's predicted values for the 20% (test) data as well as the actuals (from the original dataset). By calculating accuracy measures (like min-max accuracy) and error rates (MAPE or MSE), we can find out the prediction accuracy of the model. Now, let's see how to actually do this.

Step 1: Create the training (development) and test (validation) data samples from the original data.
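
A minimal sketch of the split, assuming the cars data:

```r
set.seed(100)  # for reproducibility of the sample
trainingRowIndex <- sample(1:nrow(cars), 0.8 * nrow(cars))  # row indices for training
trainingData <- cars[trainingRowIndex, ]   # 80% training sample
testData     <- cars[-trainingRowIndex, ]  # remaining 20% test sample
```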

Step 2: Develop the model on the training data and use it to predict the distance on the test data.

Step 3: Review diagnostic measures.
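
Continuing the sketch with the objects created in Step 1:

```r
lmMod    <- lm(dist ~ speed, data = trainingData)  # Step 2: fit on training data
distPred <- predict(lmMod, testData)               # ... and predict on test data
summary(lmMod)                                     # Step 3: review diagnostic measures
```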

From the model summary, the model p-value and the predictor's p-value are less than the significance level, so we know we have a statistically significant model. Also, the R-Sq and Adj R-Sq are comparable to the original model built on the full data.

Step 4: Calculate prediction accuracy and error rates

A simple correlation between the actuals and predicted values can be used as a form of accuracy measure. A higher correlation accuracy implies that the actuals and predicted values move in similar directions, i.e. when the actual values increase, the predicted values also increase, and vice versa.
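
A sketch, assuming the testData and distPred objects from the previous steps:

```r
actuals_preds <- data.frame(cbind(actuals = testData$dist, predicteds = distPred))
cor(actuals_preds)  # correlation accuracy between actuals and predictions
```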

Now let's calculate the Min Max accuracy and MAPE: $$MinMaxAccuracy = mean \left( \frac{min\left(actuals, predicteds\right)}{max\left(actuals, predicteds \right)} \right)$$

$$MeanAbsolutePercentageError \ (MAPE) = mean\left( \frac{abs\left(predicteds - actuals\right)}{actuals}\right)$$
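
Both measures follow directly from the actuals_preds data frame built above:

```r
min_max_accuracy <- mean(apply(actuals_preds, 1, min) / apply(actuals_preds, 1, max))
mape <- mean(abs(actuals_preds$predicteds - actuals_preds$actuals) / actuals_preds$actuals)
```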

k-Fold Cross Validation

Suppose the model predicts satisfactorily on the 20% split (test data). Is that enough to believe that your model will perform equally well all the time? It is important to rigorously test the model's performance as much as possible. One way is to ensure that the model equation performs well when it is 'built' on a different subset of the training data and used to predict the remaining data.

How do we do this? Split your data into 'k' mutually exclusive random sample portions. Keeping each portion as test data, we build the model on the remaining (k-1 portions of) data and calculate the mean squared error of the predictions. This is done for each of the 'k' random sample portions. Then, finally, the average of these mean squared errors (for the 'k' portions) is computed. We can use this metric to compare different linear models.
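
A minimal hand-rolled version in base R (packages such as DAAG offer ready-made helpers, but the loop below makes the mechanics explicit):

```r
set.seed(100)
k <- 5
folds <- sample(rep(1:k, length.out = nrow(cars)))  # randomly assign rows to k folds
cv_mse <- sapply(1:k, function(i) {
  fit  <- lm(dist ~ speed, data = cars[folds != i, ])  # build on k-1 folds
  pred <- predict(fit, cars[folds == i, ])             # predict the held-out fold
  mean((cars$dist[folds == i] - pred)^2)               # mean squared error of the fold
})
mean(cv_mse)  # average MSE across the k folds
```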

By doing this, we need to check two things:

  • If the model’s prediction accuracy isn’t varying too much for any one particular sample, and
  • If the lines of best fit don't vary too much with respect to the slope and level.

In other words, they should be parallel and as close to each other as possible. You can find a more detailed explanation for interpreting the cross validation charts when you learn about advanced linear model building.

In such a plot, check whether the dashed lines are parallel and whether the small and big symbols are not over-dispersed for any one particular color.

Where to go from here?

We have covered the basic concepts of linear regression. Besides these, you need to understand that linear regression is based on certain underlying assumptions that must be taken care of, especially when working with multiple Xs. Once you are familiar with those, the advanced regression models will show you the various special cases where a different form of regression would be more suitable.


5.6 - The General Linear F-Test

The " general linear F-test " involves three basic steps, namely:

  • Define a larger full model . (By "larger," we mean one with more parameters.)
  • Define a smaller reduced model . (By "smaller," we mean one with fewer parameters.)
  • Use an F- statistic to decide whether or not to reject the smaller reduced model in favor of the larger full model.

As you can see by the wording of the third step, the null hypothesis always pertains to the reduced model, while the alternative hypothesis always pertains to the full model.

The easiest way to learn about the general linear F-test is to first go back to what we know, namely the simple linear regression model. Once we understand the general linear F-test for the simple case, we then see that it can be easily extended to the multiple case. We take that approach here.

The full model

The " full model ", which is also sometimes referred to as the " unrestricted model ," is the model thought to be most appropriate for the data. For simple linear regression, the full model is:

\[y_i=(\beta_0+\beta_1x_{i1})+\epsilon_i\]

Here's a plot of a hypothesized full model for a set of data that we worked with previously in this course (student heights and grade point averages):

[Plot: hypothesized full-model regression line for the student height and grade point average data]

And, here's another plot of a hypothesized full model that we previously encountered (state latitudes and skin cancer mortalities):

[Plot: hypothesized full-model regression line for the state latitude and skin cancer mortality data]

In each plot, the solid line represents what the hypothesized population regression line might look like for the full model. The question we have to answer in each case is "does the full model describe the data well?" Here, we might think that the full model does well in summarizing the trend in the second plot but not the first.

The reduced model

The " reduced model ," which is sometimes also referred to as the " restricted model ," is the model described by the null hypothesis H 0 . For simple linear regression, a common null hypothesis is H 0 : β 1 = 0. In this case, the reduced model is obtained by "zeroing-out" the slope β 1 that appears in the full model. That is, the reduced model is:

\[y_i=\beta_0+\epsilon_i\]

This reduced model suggests that each response y i is a function only of some overall mean, β 0 , and some error ε i .

Let's take another look at the plot of student grade point average against height, but this time with a line representing what the hypothesized population regression line might look like for the reduced model:

[Plot: hypothesized reduced-model line for grade point average against height]

Not bad — there (fortunately?!) doesn't appear to be a relationship between height and grade point average. And, it appears as if the reduced model might be appropriate in describing the lack of a relationship between heights and grade point averages. How does the reduced model do for the skin cancer mortality example?

[Plot: hypothesized reduced-model line for skin cancer mortality against latitude]

It doesn't appear as if the reduced model would do a very good job of summarizing the trend in the population.

How do we decide if the reduced model or the full model does a better job of describing the trend in the data when it can't be determined by simply looking at a plot? What we need to do is to quantify how much error remains after fitting each of the two models to our data. That is, we take the general linear F-test approach:

  • Obtain the least squares estimates of β 0 and β 1 for the full model.
  • Determine the error sum of squares of the full model, which we denote " SSE ( F )."
  • Obtain the least squares estimate of β 0 for the reduced model.
  • Determine the error sum of squares of the reduced model, which we denote " SSE ( R )."

Recall that, in general, the error sum of squares is obtained by summing the squared distances between the observed and fitted (estimated) responses:

\[\sum(\text{observed } - \text{ fitted})^2\]

Therefore, since \(y_i\) is the observed response and \(\hat{y}_i\) is the fitted response for the full model :

\[SSE(F)=\sum(y_i-\hat{y}_i)^2\]

And, since \(y_i\) is the observed response and \(\bar{y}\) is the fitted response for the reduced model :

\[SSE(R)=\sum(y_i-\bar{y})^2\]

Let's get a better feel for the general linear F-test approach by applying it to two different datasets. First, let's look at the heightgpa data . The following plot of grade point averages against heights contains two estimated regression lines — the solid line is the estimated line for the full model, and the dashed line is the estimated line for the reduced model:

[Plot: grade point average against height, with the estimated full-model (solid) and reduced-model (dashed) lines]

As you can see, the estimated lines are almost identical. Calculating the error sum of squares for each model, we obtain:

\[SSE(F)=\sum(y_i-\hat{y}_i)^2=9.7055\]

\[SSE(R)=\sum(y_i-\bar{y})^2=9.7331\]

The two quantities are almost identical. Adding height to the reduced model to obtain the full model reduces the amount of error by only 0.0276 (from 9.7331 to 9.7055). That is, adding height to the model does very little in reducing the variability in grade point averages. In this case, there appears to be no advantage in using the larger full model over the simpler reduced model.

Look what happens when we fit the full and reduced models to the skin cancer mortality and latitude dataset :

[Plot: skin cancer mortality against latitude, with the estimated full-model and reduced-model lines]

Here, there is quite a big difference in the estimated equation for the reduced model (solid line) and the estimated equation for the full model (dashed line). The error sums of squares quantify the substantial difference in the two estimated equations:

\[SSE(F)=\sum(y_i-\hat{y}_i)^2=17173\]

\[SSE(R)=\sum(y_i-\bar{y})^2=53637\]

Adding latitude to the reduced model to obtain the full model reduces the amount of error by 36464 (from 53637 to 17173). That is, adding latitude to the model substantially reduces the variability in skin cancer mortality. In this case, there appears to be a big advantage in using the larger full model over the simpler reduced model.

Where are we going with this general linear F-test approach? In short:

  • The general linear F-test involves a comparison between SSE ( R ) and SSE ( F ).
  • If SSE ( F ) is close to SSE ( R ), then the variation around the estimated full model regression function is almost as large as the variation around the estimated reduced model regression function. If that's the case, it makes sense to use the simpler reduced model.
  • On the other hand, if SSE ( F ) and SSE ( R ) differ greatly, then the additional parameter(s) in the full model substantially reduce the variation around the estimated regression function. In this case, it makes sense to go with the larger full model.

How different does SSE ( R ) have to be from SSE ( F ) in order to justify using the larger full model? The general linear F -statistic:

\[F^*=\left( \frac{SSE(R)-SSE(F)}{df_R-df_F}\right)\div\left( \frac{SSE(F)}{df_F}\right)\]

helps answer this question. The F -statistic intuitively makes sense — it is a function of SSE ( R )- SSE ( F ), the difference in the error between the two models. The degrees of freedom — denoted df R and df F — are those associated with the reduced and full model error sum of squares, respectively.
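
As a sketch, this nested-model comparison is built into R: anova() on two lm fits reports SSE(R), SSE(F), their degrees of freedom, F*, and the P-value (illustrated here on R's built-in cars data rather than the datasets above):

```r
reduced_fit <- lm(dist ~ 1, data = cars)      # intercept-only (reduced) model
full_fit    <- lm(dist ~ speed, data = cars)  # full model with one predictor
anova(reduced_fit, full_fit)                  # general linear F-test for nested models
```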

We use the general linear F -statistic to decide whether or not:

  • to reject the null hypothesis H 0 : the reduced model,
  • in favor of the alternative hypothesis H A : the full model.

In general, we reject H 0 if F * is large — or equivalently if its associated P -value is small.

The test applied to the simple linear regression model

For simple linear regression, it turns out that the general linear F -test is just the same ANOVA F -test that we learned before. As noted earlier for the simple linear regression case, the full model is:

\[y_i=(\beta_0+\beta_1x_{i1})+\epsilon_i\]

and the reduced model is:

\[y_i=\beta_0+\epsilon_i\]

Therefore, the appropriate null and alternative hypotheses are specified either as:

  • H 0 : y i = β 0 + ε i
  • H A : y i = β 0 + β 1 x i + ε i

or, equivalently, as:

  • H 0 : β 1 = 0
  • H A : β 1 ≠ 0

The degrees of freedom associated with the error sum of squares for the reduced model is n -1, and:

\[SSE(R)=\sum(y_i-\bar{y})^2=SSTO\]

The degrees of freedom associated with the error sum of squares for the full model is n -2, and:

\[SSE(F)=\sum(y_i-\hat{y}_i)^2=SSE\]

Now, we can see how the general linear F -statistic just reduces algebraically to the ANOVA F -test that we know:

\[F^*=\left( \frac{SSE(R)-SSE(F)}{df_R-df_F}\right)\div\left( \frac{SSE(F)}{df_F}\right)=\left( \frac{SSTO-SSE}{(n-1)-(n-2)}\right)\div\left( \frac{SSE}{n-2}\right)\]

That is, the general linear F -statistic reduces to the ANOVA F -statistic:

\[F^*=\frac{MSR}{MSE}\]

For the student height and grade point average example:

\[F^*=\frac{MSR}{MSE}=\frac{0.0276/1}{9.7055/33}=\frac{0.0276}{0.2941}=0.094\]

For the skin cancer mortality example:

\[F^*=\frac{MSR}{MSE}=\frac{36464/1}{17173/47}=\frac{36464}{365.4}=99.8\]

The P -value is calculated as usual. The P -value answers the question: "what is the probability that we’d get an F* statistic as large as we did, if the null hypothesis were true?" The P -value is determined by comparing F * to an F distribution with 1 numerator degree of freedom and n -2 denominator degrees of freedom. For the student height and grade point average example, the P -value is 0.761 (so we fail to reject H 0 and we favor the reduced model), while for the skin cancer mortality example, the P -value is 0.000 (so we reject H 0 and we favor the full model).

Does alcoholism have an effect on muscle strength? Some researchers (Urbano-Marquez et al., 1989) who were interested in answering this question collected the following data ( alcoholarm.txt ) on a sample of 50 alcoholic men:

  • x = the total lifetime dose of alcohol (kg per kg of body weight) consumed
  • y = the strength of the deltoid muscle in the man's non-dominant arm

The full model is the model that would summarize a linear relationship between alcohol consumption and arm strength. The reduced model, on the other hand, is the model that claims there is no relationship between alcohol consumption and arm strength.

Upon fitting the reduced model to the data, we obtain:

[Plot: reduced model fitted to the alcohol and arm strength data]

\[SSE(R)=\sum(y_i-\bar{y})^2=1224.32\]

Note that the reduced model does not appear to summarize the trend in the data very well.

Upon fitting the full model to the data, we obtain:

[Plot: full model fitted to the alcohol and arm strength data]

\[SSE(F)=\sum(y_i-\hat{y}_i)^2=720.27\]

The full model appears to describe the trend in the data better than the reduced model.

The good news is that in the simple linear regression case, we don't have to bother with calculating the general linear F -statistic ourselves. Statistical software does it for us in the ANOVA table, which for these data assembles the quantities quoted above as:

Source        DF    SS         MS        F        P
Regression     1    504.04     504.04    33.59    <0.001
Error         48    720.27      15.006
Total         49    1224.32

As you can see, the output reports both SSE ( F ) = 720.27 — the amount of error associated with the full model (the Error row) — and SSE ( R ) = 1224.32 — the amount of error associated with the reduced model (the Total row). The F -statistic is:

\[F^*=\frac{MSR}{MSE}=\frac{504.04/1}{720.27/48}=\frac{504.04}{15.006}=33.59\]

and its associated P -value is < 0.001 (so we reject H 0 and we favor the full model). We can conclude that there is a statistically significant linear association between lifetime alcohol consumption and arm strength.


How to Check Linear Regression Assumptions (and What to Do If They Fail)

Linear regression is a powerful statistical tool used to model the relationship between a dependent variable and one or more independent variables. However, the validity and reliability of linear regression analysis hinge on several key assumptions. If these assumptions are violated, the results of the analysis can be misleading or even invalid. In this comprehensive guide, we will delve into the essential assumptions of linear regression, explore how to check them, and provide practical solutions for addressing potential violations.  

Introduction

Linear regression is a cornerstone of statistical modeling, widely employed in various fields, from economics and finance to social sciences and engineering. Its simplicity and interpretability make it a popular choice for understanding the relationships between variables. However, like any statistical method, linear regression relies on a set of assumptions to ensure the accuracy and meaningfulness of its results.

When these assumptions are met, linear regression provides unbiased and efficient estimates of the model parameters. However, when these assumptions are violated, the results can be biased, inefficient, or even completely invalid. Therefore, it’s crucial to understand these assumptions, assess whether they hold in your data, and take appropriate corrective measures if they don’t.

The Key Assumptions of Linear Regression

Before we dive into the specifics of checking and addressing assumption violations, let’s first outline the key assumptions underlying linear regression:

  • Linearity: The relationship between the independent variables and the dependent variable is linear.
  • Independence: The errors (residuals) are independent of each other.
  • Homoscedasticity: The variance of the errors is constant across all levels of the independent variables.
  • Normality: The errors are normally distributed.

Checking the Assumptions

Now that we understand the key assumptions, let’s explore how to check whether they hold in your data.

1. Checking for Linearity

The linearity assumption states that the relationship between the independent variables and the dependent variable is linear. This means that a straight line can adequately represent the relationship.

How to check:

  • Scatterplots: The most straightforward way to check for linearity is to create scatterplots of the dependent variable against each independent variable. If the relationship appears to be linear, the points should roughly form a straight line.
  • Residual plots: Another useful tool is a residual plot, which plots the residuals (the differences between the actual and predicted values) against the fitted values (the predicted values from the regression model). If the linearity assumption holds, the residuals should be randomly scattered around zero, with no discernible pattern.


What to do if the linearity assumption fails:

  • Transformations: If the relationship appears to be non-linear, you can try transforming the independent or dependent variables. Common transformations include log transformations, square root transformations, and polynomial transformations.
  • Non-linear regression: If transformations don’t work, you might need to consider using a non-linear regression model.

2. Checking for Independence

The independence assumption states that the errors (residuals) are independent of each other. This means that the error in one observation should not be related to the error in another observation.

  • Durbin-Watson test: The Durbin-Watson test is a statistical test that checks for autocorrelation (correlation between errors at different time points) in the residuals. The test statistic ranges from 0 to 4, with a value of 2 indicating no autocorrelation. Values significantly less than 2 suggest positive autocorrelation, while values significantly greater than 2 suggest negative autocorrelation.
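
A sketch, assuming the lmtest package is installed and fit is a fitted lm object as above:

```r
library(lmtest)
dwtest(fit)  # statistic near 2 suggests no autocorrelation in the residuals
```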

What to do if the independence assumption fails:

  • Time series models: If autocorrelation is present (especially in time series data), you might need to consider using time series models that explicitly account for the dependence between observations.
  • Generalized least squares: In some cases, you can use generalized least squares (GLS) regression, which allows for correlated errors.

3. Checking for Homoscedasticity

The homoscedasticity assumption states that the variance of the errors is constant across all levels of the independent variables. This means that the spread of the residuals should be roughly the same across the range of fitted values.  

  • Residual plots: The residual plot mentioned earlier can also be used to check for homoscedasticity. If the homoscedasticity assumption holds, the residuals should be evenly scattered around zero, with no fanning out or funneling in pattern.
  • Breusch-Pagan test: The Breusch-Pagan test is a statistical test that checks for heteroscedasticity (non-constant variance of errors). The null hypothesis is homoscedasticity. If the p-value is significant (typically less than 0.05), it suggests heteroscedasticity.
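
A sketch, again assuming the lmtest package and the fitted model fit:

```r
library(lmtest)
bptest(fit)  # small p-value suggests heteroscedastic (non-constant) error variance
```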

What to do if the homoscedasticity assumption fails:

  • Transformations: Transforming the dependent variable can sometimes help stabilize the variance.
  • Weighted least squares: In weighted least squares regression, you assign weights to the observations based on their estimated variance. This gives more weight to observations with lower variance and less weight to observations with higher variance.

4. Checking for Normality

The normality assumption states that the errors are normally distributed. This means that if you were to plot a histogram of the residuals, it should roughly resemble a bell-shaped curve.

  • Histogram of residuals: A histogram of the residuals can provide a visual check for normality.
  • Normal probability plot (Q-Q plot): A Q-Q plot compares the quantiles of the residuals to the quantiles of a normal distribution. If the normality assumption holds, the points should roughly fall along a straight line.
  • Shapiro-Wilk test: The Shapiro-Wilk test is a statistical test that checks for normality. The null hypothesis is normality. If the p-value is significant (typically less than 0.05), it suggests non-normality.
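
All three checks are available in base R for a fitted model fit:

```r
hist(resid(fit))                        # should look roughly bell-shaped
qqnorm(resid(fit)); qqline(resid(fit))  # points should hug the reference line
shapiro.test(resid(fit))                # small p-value suggests non-normal residuals
```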

What to do if the normality assumption fails:

  • Transformations: Transforming the dependent variable can sometimes help normalize the residuals.
  • Robust regression: Robust regression methods are less sensitive to outliers and deviations from normality.

5. Checking for Multicollinearity (in Multiple Linear Regression)

The no multicollinearity assumption states that the independent variables are not highly correlated with each other. Multicollinearity can make it difficult to interpret the individual effects of the independent variables and can lead to unstable estimates.  

  • Correlation matrix: Calculate the correlation matrix between the independent variables. High correlations (typically above 0.7 or 0.8) can indicate multicollinearity.
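
A sketch using an illustrative multi-predictor model on R's built-in mtcars data; the vif() call assumes the car package is installed (variance inflation factors are a common companion check, though not named in the text above):

```r
fit2 <- lm(mpg ~ wt + hp + disp, data = mtcars)
cor(mtcars[, c("wt", "hp", "disp")])  # high pairwise correlations flag possible trouble
car::vif(fit2)                        # VIF values above ~5-10 are suspect
```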



Hypothesis Test for Regression Slope

This lesson describes how to conduct a hypothesis test to determine whether there is a significant linear relationship between an independent variable X and a dependent variable Y .

The test focuses on the slope of the regression line

Y = Β0 + Β1X

where Β0 is a constant, Β1 is the slope (also called the regression coefficient), X is the value of the independent variable, and Y is the value of the dependent variable.

If we find that the slope of the regression line is significantly different from zero, we will conclude that there is a significant relationship between the independent and dependent variables.

Test Requirements

The approach described in this lesson is valid whenever the standard requirements for simple linear regression are met.

  • The dependent variable Y has a linear relationship to the independent variable X .
  • For each value of X, the probability distribution of Y has the same standard deviation σ.
  • The Y values are independent.
  • The Y values are roughly normally distributed (i.e., symmetric and unimodal ). A little skewness is ok if the sample size is large.

The test procedure consists of four steps: (1) state the hypotheses, (2) formulate an analysis plan, (3) analyze sample data, and (4) interpret results.

State the Hypotheses

If there is a significant linear relationship between the independent variable X and the dependent variable Y , the slope will not equal zero.

H o : Β 1 = 0

H a : Β 1 ≠ 0

The null hypothesis states that the slope is equal to zero, and the alternative hypothesis states that the slope is not equal to zero.

Formulate an Analysis Plan

The analysis plan describes how to use sample data to accept or reject the null hypothesis. The plan should specify the following elements.

  • Significance level. Often, researchers choose significance levels equal to 0.01, 0.05, or 0.10; but any value between 0 and 1 can be used.
  • Test method. Use a linear regression t-test (described in the next section) to determine whether the slope of the regression line differs significantly from zero.

Analyze Sample Data

Using sample data, find the standard error of the slope, the slope of the regression line, the degrees of freedom, the test statistic, and the P-value associated with the test statistic. The approach described in this section is illustrated in the sample problem at the end of this lesson.

Predictor   Coef   SE Coef   T      P
Constant    76     30        2.53   0.01
X           35     20        1.75   0.04

  • Standard error. The standard error of the slope (shown in the 'SE Coef' column of the hypothetical output above) can be computed as:

SE = s_b1 = sqrt [ Σ(y_i - ŷ_i)² / (n - 2) ] / sqrt [ Σ(x_i - x̄)² ]

  • Slope. Like the standard error, the slope of the regression line will be provided by most statistics software packages. In the hypothetical output above, the slope is equal to 35.
  • Degrees of freedom. For simple linear regression (one independent variable), the degrees of freedom are DF = n - 2, where n is the number of observations.
  • Test statistic. The test statistic is a t statistic, computed from the slope and its standard error:

t = b1 / SE

  • P-value. The P-value is the probability of observing a sample statistic as extreme as the test statistic. Since the test statistic is a t statistic, use the t Distribution Calculator to assess the probability associated with the test statistic. Use the degrees of freedom computed above.

Interpret Results

If the sample findings are unlikely, given the null hypothesis, the researcher rejects the null hypothesis. Typically, this involves comparing the P-value to the significance level , and rejecting the null hypothesis when the P-value is less than the significance level.

Test Your Understanding

The local utility company surveys 101 randomly selected customers. For each survey participant, the company collects the following: annual electric bill (in dollars) and home size (in square feet). Output from a regression analysis appears below.

Annual bill = 0.55 * Home size + 15

Predictor    Coef   SE Coef   T      P
Constant     15     3         5.0    0.00
Home size    0.55   0.24      2.29   0.01

Is there a significant linear relationship between annual bill and home size? Use a 0.05 level of significance.

The solution to this problem takes four steps: (1) state the hypotheses, (2) formulate an analysis plan, (3) analyze sample data, and (4) interpret results. We work through those steps below:

  • State the hypotheses .

H o : The slope of the regression line is equal to zero.

H a : The slope of the regression line is not equal to zero.

  • Formulate an analysis plan . For this analysis, the significance level is 0.05. Using sample data, we will conduct a linear regression t-test to determine whether the slope of the regression line differs significantly from zero.

  • Analyze sample data . We get the slope (b 1 ) and the standard error (SE) from the regression output.

b 1 = 0.55       SE = 0.24

We compute the degrees of freedom and the t statistic, using the following equations.

DF = n - 2 = 101 - 2 = 99

t = b 1 /SE = 0.55/0.24 = 2.29

where DF is the degrees of freedom, n is the number of observations in the sample, b 1 is the slope of the regression line, and SE is the standard error of the slope.
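
The same arithmetic can be checked in a couple of lines of R:

```r
b1 <- 0.55; SE <- 0.24; n <- 101
t_stat <- b1 / SE                           # 2.29
p_val  <- 2 * pt(-abs(t_stat), df = n - 2)  # two-tailed P-value, ~0.024
```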

  • Interpret results . Since the P-value (0.0242) is less than the significance level (0.05), we reject the null hypothesis and conclude that there is a significant linear relationship between annual bill and home size.


J Appl Stat, 48(5), 2021

Significance test for linear regression: how to test without P -values?

Paravee Maneejuk

Center of Excellence in Econometrics, Faculty of Economics, Chiang Mai University, Chiang Mai, Thailand

Woraphon Yamaka

The discussion of the use and misuse of p -values in 2016 by the American Statistical Association was a timely assertion that statistical concepts should be properly used in science. Some researchers, especially economists, who adopt significance testing and p -values to report their results, may have felt confused by the statement, leading to misinterpretations of it. In this study, we aim to re-examine the accuracy of the p -value and introduce an alternative way of testing the hypothesis. We conduct a simulation study to investigate the reliability of the p -value. Apart from investigating the performance of the p -value, we also introduce some existing approaches, Minimum Bayes Factors and belief functions, for replacing the p -value. Results from the simulation study confirm that the p -value is unreliable in some cases and that our proposed approaches seem useful as substitute tools for statistical inference. Moreover, our results show that the plausibility approach is more accurate for making decisions about the null hypothesis than the traditionally used p -values when the null hypothesis is true. However, the MBFs of Edwards et al. [ Bayesian statistical inference for psychological research . Psychol. Rev. 70(3) (1963), pp. 193–242], Vovk [ A logic of probability, with application to the foundations of statistics . J. Royal Statistical Soc. Series B (Methodological) 55 (1993), pp. 317–351] and Sellke et al. [ Calibration of p values for testing precise null hypotheses . Am. Stat. 55(1) (2001), pp. 62–71] provide more reliable results compared to all other methods when the null hypothesis is false.

1. Introduction

Our work has been inspired by the 2016 statement on statistical significance and p -values by the American Statistical Association (ASA) [ 23 ]. It stated that the p -value does not provide a good measure of evidence regarding a model or hypothesis, whose validity or significance should not be judged only by whether its p -value passes a specific threshold such as 0.10, 0.05, or 0.01. This statement indicates that in many scientific disciplines, the use of p -values to decide tests of hypotheses may have led to a large number of wrong discoveries. Some researchers, especially economists, who adopt significance testing and p -values to report their results, may have felt confused by the statement, leading to misinterpretations of it. Econometric models and statistical tests are used intensively by economists for interpreting causal effects, model selection, and forecasting. The question is how to test and make an inference without p -values.

The discussion of this issue is not new. Critiques of it began with Berkson [ 2 ], Rozeboom [ 16 ], and Cohen [ 3 ]; for a review of the studies attempting to ban p -values, see Kline [ 13 ]. The motivation for banning p -values is a concern with the logic that underlies significance testing and the p -value. One of the most prominent problems is that many researchers misunderstand the p -value as the probability of the null hypothesis. Indeed, a p -value has no meaning in this regard [ 14 , 7 ]. The misconceptions about the interpretation of the p -value are explained in the work of Assaf and Tsionas [ 1 ]. They provided a simple explanation of the problem of making an inference from a p -value: for example, if the p -value is less than 0.05, we have enough evidence to reject the null hypothesis and accept the claim. By this convention, in the regression framework, we must reject the null hypothesis ( H 0 : β = 0 ) . While this is fine, the interpretation can be misleading, as the p -value is only the probability of the observed results, regardless of the value of β . Intuitively, we make the same interpretation of β = 0 for every p -value less than 0.05. This indicates that the p -value provides only indirect evidence about the null hypothesis, as the parameters are treated identically for all p -values less than 0.05. In addition, it is known that the p -value might overstate the evidence against the null [ 7 , 12 , 21 , 1 ]. Another problem with the p -value is its high dependence on the sample size: a smaller sample size can yield a higher p -value and vice versa; see Rouder et al . [ 17 ]. Thus, if we do not have a large enough sample size, the interpretation might be wrong, as it is difficult to obtain an accurate testing result, especially in the case where the null hypothesis must be accepted (the null hypothesis is true). Also, it is dangerous to rely on purely binary decisions (e.g. whether to reject or accept the null hypothesis); it is this extreme binary view that, in our opinion, has caused various problems for decision making. Stern [ 21 ] mentioned that a non-significant p -value indicates that the data could easily be observed under the null hypothesis. However, the data could also be observed under a range of alternative hypotheses. Thus, it is overconfident to make decisions based on this binary approach, and it may contribute to a misunderstanding of the test results.

Obviously, the p -value is currently misinterpreted, overtrusted, and misused in the research reports. Hence, this puts us in a difficult situation of testing against a null hypothesis. However, our discussion should not be taken as a statement for researchers and practitioners to completely avoid p -values. Rather, we should investigate some misconceptions about the p -values and find alternative methods that have better statistical interpretations and properties. Fortunately, there is an alternative approach to p -value. We refer to such an approach as the Bayesian counterpart, the Bayes factor method and the plausibility method. Our suggested methods are similar to some of the suggestions in the American Statistician in 2019 [ 24 ]. They have further discussed the problems of p -values and suggested the new guidelines for supplementing or replacing p -values. In this article, they suggested second-generation p -values, confidence intervals, false-positive risk and Bayes factor methods for replacing the conventional p -values.

We can start with the Bayes factor, which has recently become widely accepted as a valuable alternative to the p -value approach (see [ 21 , 15 , 10 , 1 ]). Page and Satake [ 15 ] revealed two main differences between the p -value and Bayes factors. First, the calculation of the p -value involves both the observed data and 'more extreme' hypothetical outcomes, whereas the Bayes factor can be obtained from the observed data alone. Note that, in a Bayesian approach, the information from the observed data is normally combined with a prior for the parameter of interest. This is the point that sparks much of the debate regarding Bayesian methods, because the selection of the prior may strongly affect the posterior distribution and the conclusions. However, the Bayes factor can be obtained from the observed data alone by assuming a uniform prior on the parameter of interest. Second, a p -value is computed in relation to only the null hypothesis, whereas the Bayes factor considers both the null and alternative hypotheses. Many researchers have confirmed that the Bayes factor is well suited to the problem of comparing hypotheses, as it provides a natural framework for integrating a variety of sources of information about a quantity of interest. In other words, a statistical test based on this method relies on combining the information from the observed data with the prior information: the prior information from the researcher is combined with the likelihood of the data to obtain the posterior distribution for constructing the Bayes factor. This posterior distribution explicitly addresses which values of the parameters are most plausible. The Bayes factor becomes a measure of evidence regarding two conflicting hypotheses and can therefore investigate whether the parameter of interest is equal to a specific value or not, say ( H 0 : β = b ) against ( H 1 : β ≠ b ) , in the regression context. Thus, in practice, the Bayes factor can be computed as the ratio of the posterior probabilities of the null and alternative hypotheses. Held and Ott [ 9 ] confirmed that the Bayes factor can be considered an alternative to the p -value for testing hypotheses and for quantifying the degree to which observed data support or conflict with a hypothesis (this approach is discussed further below); additional information can be obtained from Stern [ 21 ]. In this study, however, we focus on the evidence against a simple null hypothesis provided by the Minimum Bayes Factor (MBF) approach. In other words, this approach transforms the p -value into a Bayes factor to give a new interpretation of the observed data (see [ 9 , 10 ]). In this context, the MBF is oriented like p -values, in that smaller values provide stronger evidence against the null hypothesis [ 8 ].

Another approach considered in this study is the plausibility-based belief function (plausibility method) proposed by Kanjanatarakul, Denoeux, and Sriboonchitta [ 11 ]. This method is an extension of the MBF concept. While the MBF focuses on transforming the p -value (or t -statistic), the plausibility-based belief function transforms β itself into a plausibility (similar to a p -value). The discussion of this method as an alternative to the p -value for testing hypotheses is quite limited; thus, we attempt to fill this gap in the literature and suggest it as another alternative method for testing hypotheses. The method allows us to obtain the plausibility of each parameter value in the range under consideration. For example, if we want to know whether β = b for b in [ − 3 , 3 ] , we can find the plausibilities P l ( β = − 3 ) , … , P l ( β = 3 ) . Thus, if we want to test ( H 0 : β = b ) , we measure the plausibility P l ( β = b ) , and if P l ( β = b ) is less than 0.05, we reject this null hypothesis. The p -value and the plausibility may seem to provide similar information; however, Kanjanatarakul, Denoeux, and Sriboonchitta [ 11 ] noted that the two measures have completely different interpretations. The p -value is a probability under the hypothesis ( H 0 : β = b ) , based on the assumption of repeated sampling, and its computation takes into account values of the t -statistic larger than the critical values corresponding to p -values of 0.10, 0.05, and 0.01. In contrast, the assertion P l ( β = b ) = α indicates that there is a parameter β with β = b whose likelihood, conditional on the data, is α times the maximum likelihood; the closer α is to zero, the less plausible the value β = b . We can obtain P l ( β = b ) for all possible values of b , meaning that the value of the plausibility depends directly on the value of β . For more explanation of the Bayesian approach and belief functions, refer to Shafer [ 20 ].

In this paper, we further explore these two methods as alternatives to the p -value by showing that, under the same hypothesis, they provide direct probability statements about the parameter of interest and a more accurate and reliable basis for inferential decision making. We conduct several experiments and illustrate the practical application of the methods using a simple regression model, which is widely employed in many research works. The issues are elaborated in the following sections: we provide the background of the frequentist p -value, the Bayes factor, and the plausibility method (likelihood-based belief functions) in Section 2 , followed by the experiment and real application studies in Sections 3 and 4 , respectively. Finally, the conclusion is provided in Section 5 .

2. Review of the testing methods for accepting or rejecting the null hypothesis

2.1. The p-value

Recall that the p -value is the probability of obtaining a test statistic equal to or more extreme than the observed result under the assumption that the null hypothesis is true [ 21 ]. More precisely, it is a quantitative measure of the discrepancy between the data and the point null hypothesis [ 7 ]. The basic rule for decision-making is that if the p -value of the sample data is less than a specified threshold or significance level, such as 0.01, 0.05, or 0.10, the result is said to be statistically significant and the null hypothesis is rejected. In this study, the simple linear regression model is considered, which can be written as

y_i = β_0 + β x_i + ε_i,

where y_i is the dependent variable and x_i is the independent variable, and ε_i is the error term, assumed to be independent and identically distributed ( iid ) normal. The statistical tests in this study are thus based on this normality assumption. To investigate the impact of the variable x_i on y_i, one needs to examine whether β equals zero; hence the null hypothesis in this problem is set as H_0: β = 0 against the alternative hypothesis H_a: β ≠ 0. Under the traditional p -value method, the p -value is calculated as

p = 2(1 − Φ(|t̂|)),

where t̂ is the observed t -statistic for testing H_0: β = 0, computed as t̂ = β̂ / √var(β̂), and Φ(⋅) denotes the cumulative standard normal distribution function. Inference regarding H_0 is then based on the p -value. For a better understanding of the misuse and misinterpretation of the p -value, let us provide a simulation example in this regard.

As an example, we simulate a situation where ( H_0: β_1 = 0 ) is true and ( H_0: β_2 = 0 ) is false. Thus, we consider a linear regression model with two independent variables. The parameter β_0 is omitted, as we are only considering the effect of the independent variables on the dependent variable; thus the model becomes

y_i = β_1 x_{1i} + β_2 x_{2i} + ε_i,    (Eq. 4)

where β_1 = 0 and β_2 = 1, and ε_i is assumed to have a normal distribution with mean zero and variance σ², ε_i ∼ iid N(0, σ²). We simulate 1000 data sets with sample sizes N = 20, 100, and 500 using the specification of Eq. 4 and estimate the model on all these data sets to obtain the p -values of β_1 and β_2. This means the parameter β_1 is assumed to have an insignificant effect on y_i, while β_2 is statistically significant. The results are illustrated in Figure 1 .
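
A minimal R sketch of this kind of experiment (drawing the predictors and errors as standard normals is an assumption here; the paper's exact design may differ):

```r
set.seed(1)
n_rep <- 1000; N <- 100
pvals <- replicate(n_rep, {
  x1 <- rnorm(N); x2 <- rnorm(N)
  y  <- 0 * x1 + 1 * x2 + rnorm(N)  # beta1 = 0 (H0 true), beta2 = 1 (H0 false)
  summary(lm(y ~ 0 + x1 + x2))$coefficients[, "Pr(>|t|)"]
})
hist(pvals["x1", ])  # roughly uniform when H0 is true
hist(pvals["x2", ])  # piled up near zero when H0 is false
```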

[Figure 1. Performance of p-values when H0 is true (upper panel) and H0 is false (lower panel).]

Let us consider the lower panel of Figure 1 . It is obvious that, when the null hypothesis ( H_0: β_2 = 0 ) is false and must be rejected, the p -values are almost all below the 0.01 significance level, except for the sample size N = 20. Since the bulk of the p -values is well below 0.01, we are able to draw the accurate inference of rejecting the null hypothesis. There are, however, still chances that p -values fall above the 0.01 criterion. This indicates that researchers and practitioners still have a chance of making a wrong interpretation (a type II error), especially when a small sample size is used in the study.

Then we turn our attention to the parameter β_1, for which we know that ( H_0: β_1 = 0 ) must be accepted. Across the significance tests on all 1,000 data sets, the p -values are spread all over the place, approximately following a uniform distribution. This indicates that there is a high chance that the null hypothesis is rejected. Therefore, in this example, we can conclude that decision-making based on p -values will be, more or less, arbitrary, and the conclusions imprecise.

Furthermore, we also observe that the p -values in this simulation study exhibit no dependence on the sample size when the null hypothesis is true and must be accepted (upper panel, H_0: β_1 = 0), but a high dependence on the sample size when the null hypothesis is false and must be rejected (lower panel, H_0: β_2 = 0). This illustrates that the probability of rejecting ( H_0: β_2 = 0 ) when ( H_1: β_2 ≠ 0 ) is true depends on N, whereas the probability of rejecting ( H_0: β_1 = 0 ) when it is true (a type I error) does not depend on N.

2.2. Bayes factor

As one of the powerful tools for statistical testing, the Bayes factor is widely accepted as a valuable alternative to the p -value approach. Stern [ 21 ] mentioned that the 'Bayes factor has a significant advantage over the p -value as it can address the likelihood of the observed data for each hypothesis and thereby treating the two symmetrically'. This is to say it is more realistic, as it provides inferences and compares hypotheses for the given data we analyze. In this study, we focus on the Bayes factor and consider a point null hypothesis H_0: β = 0 with prior probabilities Pr(H_0) = π and Pr(H_1) = 1 − Pr(H_0). From the Bayesian point of view, it is possible to compute the probability of a hypothesis conditionally on observed data in terms of the posterior; an appropriate statistic for comparing hypotheses is the posterior odds:

Pr(H_0 | y, x) / Pr(H_1 | y, x) = [Pr(y, x | H_0) / Pr(y, x | H_1)] × [Pr(H_0) / Pr(H_1)],

in which the ratio Pr(y, x | H_0) / Pr(y, x | H_1) = BF_01 is called the Bayes factor of H_0 relative to H_1. This Bayes factor can be computed formally as

BF_01 = [Pr(H_0 | y, x) / Pr(H_1 | y, x)] × [Pr(H_1) / Pr(H_0)],    (Eq. 6)

where Pr(H_0 | y, x) = ∫_{H_0} f(y, x | β) π(β) dβ / f(y, x); f(y, x | β) is the density function (likelihood function) and π(β) is the prior density of β. Pr(H_0 | y, x) and Pr(H_1 | y, x) are the posterior probabilities under the null and the alternative hypothesis, respectively. If the prior probabilities satisfy Pr(H_0) = Pr(H_1), the Bayes factor becomes the likelihood ratio of H_0 relative to H_1 [ 12 ]. Thus, the Bayes factor provides a direct quantitative measure of whether the data have increased or decreased the odds of H_0. As we know, the Bayesian approach combines the likelihood of the data with the prior distribution of the parameter, and the problem is what the appropriate prior is. In Eq. 6, the prior probabilities Pr(H_0) and Pr(H_1) are included, but when no information about the prior distributions is available, the approach of the Minimum Bayes Factor (MBF) can be used. Since min BF_01 ≤ 1, minimum Bayes factors lie in the same range as p -values. One way to compute the MBF was introduced by Edwards, Lindman, and Savage [ 5 ]: noting that the Bayes factor (Eq. 6) is based on the observed data, they suggested that the MBF can be computed easily by treating the p -value ( p ) as that observed data (Eq. 7).

The other option for obtaining the MBF is to back-transform p to the underlying test statistic t that was used to calculate the p -value (see Eq. 3). The MBF can therefore be computed conditionally on the t -statistic (Eq. 8),

under the assumption that t = t(p) is a one-to-one transformation. If this transformation does not hold, the p -based Bayes factor (Eq. 7) is preferred, since it is computed directly from the p -value. In this study, we focus on the evidence against a simple null hypothesis provided by the MBF, as it is easy to compare with a p -value. To compute the MBF, we minimize the test-based Bayes factor of Eq. 7 or Eq. 8; that is,

MBF = f(y, x | H_0) / f(y, x | β̂, H_1),

where f(⋅ | β̂, H_1) is the maximized density function, i.e. the likelihood at the optimal β̂. The minimum Bayes factor is the smallest possible Bayes factor that can be obtained for a p -value within a certain class of distributions considered under the alternative [ 9 ].

Several methods for computing the MBF have been proposed since the pioneering work of Edwards et al . [ 5 ]. In this study, we mention four methods as tools for computing the MBF, with an emphasis on the two-sided p -based and test-based Bayes factors. These methods are listed in Table 1 .

Table 1. Formulas for computing the MBF, as given by Edwards et al. [ 5 ], Goodman, Vovk together with Sellke et al., and Sellke et al.

However, the interpretation of the MBF is still different from that of the p -value approach. The transformation of a p -value to an MBF is called calibration (but it is not just a change of scale, like converting degrees Fahrenheit to degrees Celsius). By considering the MBF, we are in a different conceptual framework. The categorization of the Bayes factor is provided in Table 2 [ 9 ].

Table 2. Categorization of the minimum Bayes factor [9].

    Minimum Bayes factor   Interpretation
    1–1/3                  Weak evidence for H1
    1/3–1/10               Moderate evidence for H1
    1/10–1/30              Substantial evidence for H1
    1/30–1/100             Strong evidence for H1
    1/100–1/300            Very strong evidence for H1
    < 1/300                Decisive evidence for H1

2.3. Plausibility-based belief function

Now, let us consider H0: β = 0 in this method. As mentioned before, we can measure the plausibility of β = 0 using the belief function. Denoeux [ 4 ] showed that the belief function Bel_y^Θ on Θ can be built from the likelihood function. Thus, we can use the normal likelihood to quantify the plausibility of H0: β = 0, whose value lies between zero and one, the same range as that of the p-value. The plausibility of H0 is given by the contour function

Pl_y^Θ(H) = sup_{β ∈ H} pl_y(β),

for any hypothesis H ⊆ Θ, where pl_y(β) is the relative likelihood L(β) / L(β̂) and β̂ is the parameter value that maximizes the likelihood function. Clearly, the plausibility of H0 is rescaled to the interval [0, 1]. Under the normality assumption, β̂ is the maximum likelihood estimate.

Taking the derivative of the log-likelihood with respect to β and setting it to zero, we obtain

β̂ = (X′X)⁻¹ X′y.

Consequently, we can estimate Pl_y^Θ(H0: β = 0) as

Pl_y^Θ(H0: β = 0) = L(0) / L(β̂).
This method can be viewed as an extension of the minimum Bayes factor approach, as the plausibility is also computed as a ratio of likelihoods. However, it transforms the value of β itself rather than the p-value or t-statistic, so the plausibility is computed directly from any β. It can thus be viewed as an alternative to the p-value.

Let us consider the same Example 1 and illustrate the calculation of Pl_y^Θ(H0: β = 0). In this example, we simulate one dataset with sample size N = 50. To generate the data, we set the seed of R's random number generator to 1 via set.seed(1); the estimated results are provided in Table 3 and Figure 2.
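A minimal R sketch of this computation follows. Since the paper's exact design is not shown here, the data generating process below (standard-normal predictors and errors, true β1 = 0 and β2 = 1, as in Table 3) is an assumption, so the output will be close to, but not identical to, Table 3:

    # plausibility of H0: beta1 = 0 as a profile relative likelihood
    set.seed(1)
    N  <- 50
    x1 <- rnorm(N); x2 <- rnorm(N)
    y  <- 0 * x1 + 1 * x2 + rnorm(N)   # assumed DGP: true beta1 = 0, beta2 = 1
    full <- lm(y ~ x1 + x2)
    red  <- lm(y ~ x2)                 # restricted fit imposes beta1 = 0
    # logLik() returns the ML log-likelihood, so this is L(0) / L(beta1_hat)
    pl_beta1 <- exp(as.numeric(logLik(red) - logLik(full)))
    pl_beta1                           # close to 1: beta1 = 0 is highly plausible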


Figure 2. Marginal contour functions for the parameters β1 and β2 (based on one simulated dataset, N = 50). The vertical red line marks Pl_y^Θ(H0: βj = 0), j = 1, 2.

Table 3. Estimated results for Example 1 (one simulated dataset, N = 50).

    Parameter   True value   Estimate   SE       |t|-statistic   p-value   Plausibility
    β1          0            −0.0203    0.1238   0.1640          0.8700    0.9860
    β2          1            1.0045     0.1324   7.5860          0.0000    0.0000

Table 3 provides the parameters estimated by maximum likelihood (MLE), together with the standard errors of the parameters (Column 3), absolute t-statistics (Column 4), p-values (Column 5), and plausibilities Pl_y^Θ(H0: βj = 0), j = 1, 2. We can observe that the p-values and the plausibilities provide similar results: both the p-value and the plausibility of H0: β2 = 0 are zero, indicating that β2 is significantly different from zero. Likewise, both methods give the same interpretation that the parameter β1 is insignificant, as both the p-value and the plausibility exceed 0.01, 0.05, and 0.10. It is interesting, however, that the degree differs between the p-value of H0: β1 = 0 and the plausibility Pl_y^Θ(H0: β1 = 0): the plausibility is 0.9860, while the p-value is 0.8700. In other words, the p-value is 0.8700 / 0.9860 = 0.8823 times the plausibility, so it reports somewhat weaker support for H0: β1 = 0 than the plausibility-based belief function does. If Pl_y^Θ(H0: β1 = 0) is the true measure, the p-value underestimates this probability. The comparison of the plausibility-based belief function and the p-value is further discussed in Section 3.

3. Simulation experiment

Several alternative methods for making an inference have been introduced in this study. If we need a statistical test, which one is preferable, and how do we compare the different approaches? To answer these questions, in this section we conduct an experimental study using simulated data. For comparison, we consider cases where, after the tests, we can find out the truth; that is, we simulate data for which the correct answer to the statistical test is already known.

To further illustrate, we consider an experiment making direct comparisons among the p-value, Bayes factor, and plausibility approaches in the linear regression context. We start with the following data generating process,

y_i = β1 x1i + β2 x2i + ε_i,   (16)
where β1 = 3 and β2 = 0, so that only x1i has a significant effect on yi. εi, x1i, and x2i are generated from a normal distribution with mean zero and variance one. Six different sample sizes are simulated: N = 10, 20, 50, 100, 200, and 500, with 1,000 datasets simulated for each sample size. Simulations were generated using fixed random seeds to simplify replication. To compare the performance of the methods, this study proposes the percentage of incorrect inferences as the measure. For the p-value, we use conventional statistical inference, in which a p-value at or below a threshold (0.10, 0.05, or 0.01) leads to rejection of the null hypothesis; the plausibility-based belief function is interpreted in the same way. For the minimum Bayes factor approach, on the other hand, the interpretation differs from the first two methods, as the decision is made from the MBF following the labeled intervals of Held and Ott [ 9 ] presented in Table 2. Our interest is in how often these methods return a non-significant outcome when the null is false, or a significant outcome when the null is true. The results of the method comparison are provided in the following figures and tables.
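For concreteness, one cell of this design can be sketched in a few lines of R (our sketch, using the 0.10 criterion for the test of the true null H0: β2 = 0):

    # percentage of incorrect inferences for H0: beta2 = 0 at N = 50
    set.seed(1)
    reps <- 1000; N <- 50
    wrong <- replicate(reps, {
      x1 <- rnorm(N); x2 <- rnorm(N)
      y  <- 3 * x1 + 0 * x2 + rnorm(N)
      p2 <- coef(summary(lm(y ~ x1 + x2)))["x2", "Pr(>|t|)"]
      p2 < 0.10                # rejecting the true null is an incorrect inference
    })
    mean(wrong) * 100          # should land near the nominal 10% level, as in Table 4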

The p-value and plausibility results are reported in Tables 4 and 5, respectively. These two approaches are interpreted in the same way: values below 0.10, 0.05, or 0.01 are said to be significant. The MBF results, reported in Tables 6–9, follow the interpretation scale of Table 2.

Table 4. The p-value approach: percentage of datasets with a significant result.

    Test of H0: β1 = 0 (true β1 = 3)
    Criterion         N = 10   N = 20   N = 50   N = 100   N = 200   N = 500
    p-value < 0.01    99.7     100      100      100       100       100
    p-value < 0.05    99.9     100      100      100       100       100
    p-value < 0.10    100      100      100      100       100       100

    Test of H0: β2 = 0 (true β2 = 0)
    Criterion         N = 10   N = 20   N = 50   N = 100   N = 200   N = 500
    p-value < 0.01    2.1      1.9      2.2      1.1       1.3       1.2
    p-value < 0.05    8.7      7.7      5.6      5.2       4.5       5.7
    p-value < 0.10    13.5     13       10.2     9.5       10.3      10.3
Table 5. The plausibility approach: percentage of datasets with a significant result.

    Test of H0: β1 = 0 (true β1 = 3)
    Criterion              N = 10   N = 20   N = 50   N = 100   N = 200   N = 500
    plausibility < 0.01    100      100      100      100       100       100
    plausibility < 0.05    100      100      100      100       100       100
    plausibility < 0.10    100      100      100      100       100       100

    Test of H0: β2 = 0 (true β2 = 0)
    Criterion              N = 10   N = 20   N = 50   N = 100   N = 200   N = 500
    plausibility < 0.01    5.1      1.7      1.1      0.5       0.4       0.3
    plausibility < 0.05    11.5     5.3      3.4      1.9       1.8       2.3
    plausibility < 0.10    16.2     8.9      5.1      3.4       3         3.8
Table 6. MBF of Goodman [6]: percentage of datasets in each evidence category.

    Test of H0: β1 = 0 (true β1 = 3)
    MBF           Interpretation                N = 10   N = 20   N = 50   N = 100   N = 200   N = 500
    1–1/3         Weak evidence for H1          0        0        0        0         0         0
    1/3–1/10      Moderate evidence for H1      0.1      0        0        0         0         0
    1/10–1/30     Substantial evidence for H1   0.2      0        0        0         0         0
    1/30–1/100    Strong evidence for H1        0.5      0        0        0         0         0
    1/100–1/300   Very strong evidence for H1   0.9      0        0        0         0         0
    < 1/300       Decisive evidence for H1      98.3     100      100      100       100       100

    Test of H0: β2 = 0 (true β2 = 0)
    1–1/3         Weak evidence for H1          81.7     82.7     85.8     86.7      85.8      84.8
    1/3–1/10      Moderate evidence for H1      13.3     12       11.3     10.4      11.3      11.4
    1/10–1/30     Substantial evidence for H1   2.9      3.5      1.6      1.9       1.6       2.9
    1/30–1/100    Strong evidence for H1        1        1.4      1        0.6       1         0.6
    1/100–1/300   Very strong evidence for H1   0.3      0.2      0.2      0.3       0.2       0.1
    < 1/300       Decisive evidence for H1      0.8      0.2      0.1      0.1       0.1       0.2
Table 7. MBF of Edwards et al. [5]: percentage of datasets in each evidence category.

    Test of H0: β1 = 0 (true β1 = 3)
    MBF           Interpretation                N = 10   N = 20   N = 50   N = 100   N = 200   N = 500
    1–1/3         Weak evidence for H1          0        0        0        0         0         0
    1/3–1/10      Moderate evidence for H1      0.6      0        0        0         0         0
    1/10–1/30     Substantial evidence for H1   0.5      0        0        0         0         0
    1/30–1/100    Strong evidence for H1        0.8      0        0        0         0         0
    1/100–1/300   Very strong evidence for H1   1.0      0        0        0         0         0
    < 1/300       Decisive evidence for H1      97.1     100      100      100       100       100

    Test of H0: β2 = 0 (true β2 = 0)
    1–1/3         Weak evidence for H1          95.2     95       95.6     97.1      97.2      96.7
    1/3–1/10      Moderate evidence for H1      2.9      4.0      3        2.1       1.7       2.7
    1/10–1/30     Substantial evidence for H1   1        0.7      1.2      0.4       0.8       0.4
    1/30–1/100    Strong evidence for H1        0.4      0.1      0.2      0.4       0.3       0.1
    1/100–1/300   Very strong evidence for H1   0.1      0.1      0        0         0         0.1
    < 1/300       Decisive evidence for H1      0.4      0.1      0        0         0         0
Table 8. MBF of Vovk [22] and Sellke et al. [19]: percentage of datasets in each evidence category.

    Test of H0: β1 = 0 (true β1 = 3)
    MBF           Interpretation                N = 10   N = 20   N = 50   N = 100   N = 200   N = 500
    1–1/3         Weak evidence for H1          0.1      0        0        0         0         0
    1/3–1/10      Moderate evidence for H1      0.3      0        0        0         0         0
    1/10–1/30     Substantial evidence for H1   0.5      0        0        0         0         0
    1/30–1/100    Strong evidence for H1        0.9      0        0        0         0         0
    1/100–1/300   Very strong evidence for H1   1.0      0        0        0         0         0
    < 1/300       Decisive evidence for H1      97.2     100      100      100       100       100

    Test of H0: β2 = 0 (true β2 = 0)
    1–1/3         Weak evidence for H1          94.5     94       95.1     96.6      96.7      95.4
    1/3–1/10      Moderate evidence for H1      3.5      4.4      3.1      2.6       2.0       3.8
    1/10–1/30     Substantial evidence for H1   1.1      1.3      1.6      0.4       1.0       0.6
    1/30–1/100    Strong evidence for H1        0.3      0.1      0.2      0.4       0.3       0.1
    1/100–1/300   Very strong evidence for H1   0.1      0.1      0        0         0         0.1
    < 1/300       Decisive evidence for H1      0.5      0.1      0        0         0         0
Table 9. MBF of Sellke et al. [19]: percentage of datasets in each evidence category.

    Test of H0: β1 = 0 (true β1 = 3)
    MBF           Interpretation                N = 10   N = 20   N = 50   N = 100   N = 200   N = 500
    1–1/3         Weak evidence for H1          0        0        0        0         0         0
    1/3–1/10      Moderate evidence for H1      0.1      0        0        0         0         0
    1/10–1/30     Substantial evidence for H1   0        0        0        0         0         0
    1/30–1/100    Strong evidence for H1        0.7      0        0        0         0         0
    1/100–1/300   Very strong evidence for H1   0.5      0        0        0         0         0
    < 1/300       Decisive evidence for H1      98.7     100      100      100       100       100

    Test of H0: β2 = 0 (true β2 = 0)
    1–1/3         Weak evidence for H1          82.6     83.4     86.5     87.4      86.4      85.9
    1/3–1/10      Moderate evidence for H1      11.6     10.5     8.6      9.2       10.2      9.5
    1/10–1/30     Substantial evidence for H1   3.4      4.1      2.4      1.9       2.0       2.9
    1/30–1/100    Strong evidence for H1        0.8      1.4      1.6      1.0       0.6       1.3
    1/100–1/300   Very strong evidence for H1   0.8      0.3      0.7      0.1       0.6       0.2
    < 1/300       Decisive evidence for H1      0.8      0.3      0.2      0.4       0.2       0.2

We compare the three approaches via the percentage of incorrect inferences. It is not straightforward to compare them, as the interpretation of a significant result differs across approaches. Hence, in this experiment, decisive evidence for H1: β ≠ 0 is considered an acceptable decision favoring the alternative hypothesis, while weak, moderate, substantial, strong, and very strong evidence are considered acceptable decisions favoring the null hypothesis H0: β = 0. Likewise, the p-value and plausibility use the cut-offs 0.10, 0.05, and 0.01 to make decisions about the null hypothesis: if the p-value or plausibility is less than the cut-off, the null hypothesis is rejected; otherwise, it is accepted.

We begin our experiment with the case in which H1: β1 ≠ 0 must be accepted, as we set β1 = 3 in Eq. (16). As we can observe in Tables 4–9, the plausibility calibration produces the lowest percentage of incorrect inferences compared to the p-value and the four MBFs when N = 10. Using the 0.10 criterion, the percentage of incorrect inferences of both the plausibility and the p-value is 0%, indicating the high reliability of the plausibility test when H0: β1 = 0 must be rejected. If we consider the more restrictive decision points, the 0.05 and 0.01 criteria, the incorrect inferences of the plausibility method remain at 0%, thus favoring H1: β1 ≠ 0, whereas the p-value incorrectly favors the null hypothesis in 0.1% and 0.3% of the datasets, respectively. This indicates that there is only a small chance that the p-value is misleading. Among the four MBFs, we find that the MBF of Sellke et al. [ 19 ] performs best in this experiment, as its percentage of incorrect inferences is only 1.3%; yet this rate is still higher than those of the p-value and plausibility methods. For samples with N > 10, the three approaches provide the same interpretation, as the percentage of incorrect inferences is 0% in all cases.

Next, we consider the case in which the null hypothesis H0: β2 = 0 must be accepted. As we can see in Tables 4–9, the results are heterogeneous. To give a clearer picture, we summarize the percentages of incorrect inferences in Figure 3, where different lines indicate different methods. The results show that there is variability in the evidence provided by the tests. Firstly, the right panel of Figure 3 shows that the percentage of incorrect inferences of all methods tends to decrease as the sample size increases to N = 100, and remains roughly constant thereafter. Secondly, the minimum Bayes factor of Edwards et al. [ 5 ] produces the lowest rate of incorrect inferences.


Figure 3. Summary percentage of incorrect inference results.

Taking a closer look at the behavior of the minimum Bayes factor of Edwards et al. [ 5 ] in Table 7, the percentage of datasets in which this method finds moderate, substantial, strong, very strong, or decisive evidence is always less than or equal to 5%. Meanwhile, for the p-value approach (Table 4), the percentage of incorrect inferences ranges between 9.5% and 13.5%. This indicates that the p-value claims roughly 2.3–3.7 times as much evidence against H0: β2 = 0 (computed as the percentage of incorrect inferences of the p-value divided by that of the MBF) as the MBF of Edwards et al. [ 5 ] does. In other words, the p-value exaggerates statistical significance by a factor of around 2–4 relative to the MBF. Therefore, we can argue that conclusions derived from p-values are less accurate as a measure of the strength of evidence against H0: β2 = 0.

Although the plausibility approach does not produce the lowest rate of incorrect inferences in this case, its rate is still lower than that of the p-value approach for all sample sizes except N = 10. Table 4 reveals that the percentage of incorrect inferences from the p-value in the test of H0: β2 = 0 decreases with the growth of the sample size. Overall, this indicates that the plausibility approach is more accurate for making decisions about the null hypothesis than the traditionally used p-value thresholds. Therefore, from an empirical or applied point of view, we could consider this alternative a useful tool for researchers to avoid false discovery claims.

In addition, we plot boxplots displaying the full range of variation of the p-values, plausibilities, and MBFs, obtained from the same simulation results as Tables 4–9, in Figures 4–6, respectively. In all panels, the y-axis plots the probability values obtained from the different methods and sample sizes.


Figure 4. The full range of variation (from min to max) of p-values.


Figure 5. The full range of variation (from min to max) of plausibility (PL).


Figure 6. The full range of variation (from min to max) of MBFs.

Consider first the case in which the null hypothesis H0: β1 = 0 must be rejected (the true β1 is 3); in other words, there is strong evidence favoring the alternative hypothesis. As shown in the left panels of Figures 4–6, there is little variation in the probability values for all methods. When the sample size is greater than 10, all methods show evidence supporting the alternative hypothesis. For a small sample size, say N = 10, however, there are a number of datasets for which the testing methods lead to misinterpretation. Among the 1,000 simulated datasets, the p-value fails to favor H1: β1 ≠ 0 in one dataset when using the 0.05 criterion, while the plausibility always supports the alternative hypothesis. The four MBFs show similar results, and their variation is also similar to that of the p-value and plausibility approaches, except for N = 10. This indicates that the power of any test depends on the sample size: if the sample size is large enough, the test will be more reliable, especially when the null hypothesis H0: β1 = 0 must be rejected. However, this reliability does not carry over to the case in which the null hypothesis must be accepted.

Using decisive evidence for H1: β1 ≠ 0 (MBF01 < 1/300) as the criterion, the number of times that the MBFs produce a value above this cut-off is relatively high compared to the p-value and plausibility approaches (Figure 6). These results indicate that there could be a misinterpretation of the hypothesis test when the number of observations is low. However, if we consider weak evidence for H1: β1 ≠ 0 (1/3 < MBF01 ≤ 1), the MBFs of Edwards et al. [ 5 ] and of Vovk [ 22 ] and Sellke et al. [ 19 ] essentially never fall in this range, indicating that these two MBFs have only a small chance of providing the wrong interpretation. This corresponds to the results provided in Tables 7 and 8.

Furthermore, the variation of the p-values, plausibilities, and MBFs in the right panels of Figures 4–6 illustrates the test of β2. Recall that in this case the null hypothesis is correct (β2 = 0) and must be accepted. The variation is relatively high compared to the case in which the null hypothesis is incorrect, as reflected by the greater heights of the boxes. Under this test, the MBFs of Edwards et al. [ 5 ] and of Vovk [ 22 ] and Sellke et al. [ 19 ], illustrated in panels (b) and (c) of Figure 6, show small variability, as their boxes are short, and the medians of these two MBFs exceed 0.9, indicating weak evidence for H1. However, some outliers lie below the 1/300 cut-off (MBF01 < 1/300), indicating a small chance that these MBFs favor decisive evidence for H1. Consider next the MBFs of Goodman [ 6 ] and Sellke et al. [ 19 ]: for all sample sizes, their median MBFs are around 0.8 and 0.9, respectively, so these two methods also tend to favor weak evidence for the alternative hypothesis; however, their low values are not flagged as outliers, implying a somewhat higher chance of reaching decisive evidence for H1. Therefore, we can conclude that the MBFs of Edwards et al. [ 5 ] and of Vovk [ 22 ] and Sellke et al. [ 19 ] perform well and provide more accurate testing for the case in which the null hypothesis must be accepted. This corresponds to the results provided in Tables 7 and 8.

In a nutshell, these simulation results provide evidence of the high performance of the plausibility approach when the null hypothesis is incorrect and must be rejected. Meanwhile, the MBFs of Edwards et al. [ 5 ] and of Vovk [ 22 ] and Sellke et al. [ 19 ] provide more reliable results than all the other methods when the null hypothesis is correct and must be accepted. Yet, even then, there is no evidence of 100% correct inferences. This indicates that decision-making based on these approaches will be, more or less, arbitrary.

4. Illustrated example

Finally, we compare the MBFs, plausibility, and p-value using a real application on the impact of economic variables on the energy price in Spain. We use a dataset from the R package 'MSwM' [ 18 ] covering the price of energy in Spain (P_t) and other economic variables, namely the oil price (O_t), gas price (G_t), coal price (C_t), Dollar-Euro exchange rate (Ex_t), Ibex 35 index divided by one thousand (Ibex_t), and daily demand of energy (D_t). The data were collected from the Spanish Market Operator of Energy (OMEL), the Bank of Spain, and the U.S. Energy Information Administration, covering January 1, 2002 to October 31, 2008. We consider the following linear regression model:

P_t = β0 + β1 O_t + β2 G_t + β3 C_t + β4 Ex_t + β5 Ibex_t + β6 D_t + ε_t
The application results of the three statistical tests are provided in Table 10, where we list each covariate's coefficient, p-value, plausibility, and the four types of minimum Bayes factor.

Table 10. Application results: coefficients, p-values, plausibilities, and minimum Bayes factors.

    Variable    Coefficient   p-value   Plausibility   Goodman   Edwards et al.   Vovk and Sellke et al.   Sellke et al.
    Intercept   −9.1253       0.0000    0.0000         0.0000    0.0000           0.0000                   0.0000
    O_t         0.0284        0.0000    0.0000         0.0000    0.0000           0.0000                   0.0000
    G_t         0.0430        0.0000    0.0000         0.0000    0.0000           0.0000                   0.0000
    C_t         −0.0021       0.2800    0.0000         0.5575    0.9936           0.9686                   0.6423
    Ex_t        6.0403        0.0000    0.0000         0.0000    0.0000           0.0000                   0.0000
    Ibex_t      −0.1590       0.0000    0.0000         0.0000    0.0000           0.0000                   0.0000
    D_t         0.0089        0.0000    0.0000         0.0000    0.0000           0.0000                   0.0000

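Table 10 can be approximated in a few lines of R. The following is a minimal sketch, assuming the 'energy' dataset shipped with the MSwM package carries the column names Price, Oil, Gas, Coal, EurDol, Ibex35, and Demand (an assumption), with the p-based Vovk-Sellke bound standing in for the full set of MBFs:

    library(MSwM)
    data(energy)   # column names below are assumptions, not verified here
    fit <- lm(Price ~ Oil + Gas + Coal + EurDol + Ibex35 + Demand, data = energy)
    p   <- coef(summary(fit))[, "Pr(>|t|)"]
    mbf <- ifelse(p < 1 / exp(1), -exp(1) * p * log(p), 1)   # Vovk-Sellke bound
    round(cbind(coef(fit), p, mbf), 4)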
Considering Table 10, we find strong evidence that six out of the seven coefficients favor the alternative hypothesis, and all three approaches provide the same interpretation for these coefficients: the p-value is less than 0.01, corresponding to decisive evidence for the alternative hypothesis from all four MBF methods. However, there is a contradictory result for the coefficient β3, as each method leads to a different interpretation. The p-value indicates an insignificant effect of the coal price on the energy price of Spain, but the plausibility gives a significant result, while all four MBF methods fall into the category of weak evidence for the alternative hypothesis. These results seem to correspond to our simulation experiment in Section 3, which shows that the rate of incorrect inferences is relatively high when the null hypothesis is correct compared to when it is incorrect. The results of this application suggest that researchers need to be careful in interpreting a statistical result, and that various approaches should be used to cross-check one another.

5. Conclusion

In this paper, we highlight some of the misconceptions about the p-value, illustrate its performance using simulated experiments, and introduce two alternatives to the p-value, namely the plausibility and the minimum Bayes factor (MBF), for finding the evidence against a simple null hypothesis in the linear regression context. The MBF is an alternative to the p-value offered by the Bayesian approach, relying solely on the observed samples to provide direct probability statements about the parameters of interest. The plausibility approach can be viewed as an extension of the MBF approach, as the plausibility is also computed as a relative likelihood ratio; however, it transforms the value of the parameter itself rather than the p-value or t-statistic, so the plausibility can be computed directly for any parameter.

The values of the MBF and the plausibility lie in the same range as the p-value, which facilitates comparison. While the plausibility is interpreted like the p-value, with 0.10, 0.05, and 0.01 as cut-offs or decision criteria, the MBF is interpreted following the labeled intervals of Held and Ott [ 9 ]: an MBF between 1 and 1/3 is considered weak evidence for H1, 1/3–1/10 moderate evidence, 1/10–1/30 substantial evidence, 1/30–1/100 strong evidence, 1/100–1/300 very strong evidence, and below 1/300 decisive evidence. To compare these three approaches, we conduct a simulation study measuring the incorrect inferences of each approach.

Our results show that the plausibility approach is more accurate for making decisions about the null hypothesis than the traditionally used p-value thresholds, while the MBFs of Edwards et al. [ 5 ] and of Vovk [ 22 ] and Sellke et al. [ 19 ] provide the most reliable results of all the methods compared when the null hypothesis is true and must be accepted. Even then, however, there is no evidence of 100% correct inferences, which indicates that decision-making based on these approaches will be, more or less, arbitrary. As we mention in the introduction, it is dangerous to rely on binary decisions alone. Hence, decisions favoring either hypothesis should consider the whole categorization of the MBF in order to avoid this kind of strong inference. In addition, we could consider these alternatives as useful tools for researchers to avoid false discovery claims based on the p-value.

Nevertheless, our discussion should not be taken as a call for researchers and practitioners to abandon the p-value completely. Rather, we should investigate the misconceptions about the p-value and look for alternative methods with better statistical interpretations and properties. Finally, we note that research involves much more than the statistical interpretation stage, and results should be interpreted with care. Instead of banning or rejecting the p-value all at once, we suggest considering all of these statistical tests to achieve reliable results. Furthermore, non-statistical evidence, such as theory and real-world evidence, should be considered in supporting decision making; this will help us reach more reliable conclusions.

Acknowledgments

The authors would like to thank the four anonymous reviewers, the editor, and Prof. Hung T. Nguyen for their helpful comments and suggestions. Financial support for this work was provided by the Center of Excellence in Econometrics, Chiang Mai University.

Disclosure statement

No potential conflict of interest was reported by the author(s).



Changing null hypothesis in linear regression

I have some data that is highly correlated. If I run a linear regression I get a regression line with a slope close to one (= 0.93). What I'd like to do is test if this slope is significantly different from 1.0. My expectation is that it is not. In other words, I'd like to change the null hypothesis of the linear regression from a slope of zero to a slope of one. Is this a sensible approach? I'd also really appreciate it if you could include some R code in your answer so I could implement this method (or a better one you suggest!). Thanks.

  • correlation
  • hypothesis-testing

– Nick Crawford

5 Answers
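The trick, as the comments below indicate, is to build the hypothesized slope of 1 into the model, so that the default t-test of the coefficient against zero becomes a test of the slope against one. A minimal sketch, assuming y and x are your data vectors:

    # move the hypothesized slope to the left-hand side:
    # the slope of (y - x) on x equals (true slope - 1)
    fit <- lm(I(y - x) ~ x)
    summary(fit)            # the t-test on x now tests H0: slope = 1

    # equivalently, keep y as the response and use an offset
    fit2 <- lm(y ~ x, offset = 1.00 * x)
    summary(fit2)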

– GaBorgulya

  • Thank you! I just couldn't figure out how to change the lm command. –  Nick Crawford Commented Apr 21, 2011 at 14:35
  • Then is it exactly the same to use "lm(y-x ~ x)" as "lm(y ~ x, offset= 1.00*x)" (or without that 1.00)? Wouldn't that subtraction be a problem for the least-squares assumptions or for collinearity? I want to use it for a logistic regression with random effects glmer(....). It would be great to have a simple but correct method to get the p-values. –  skan Commented May 19, 2017 at 15:43
  • Here stats.stackexchange.com/questions/111559/… Matifou says this method is worse than using the Wald test. –  skan Commented May 19, 2017 at 15:45

Your hypothesis can be expressed as $R\beta=r$, where $\beta$ is the vector of your regression coefficients, $R$ is a restriction matrix, and $r$ the vector of restriction values. If our model is

$$y=\beta_0+\beta_1x+u$$

then for the hypothesis $\beta_1=1$, $R=[0,1]$ and $r=1$.

For these types of hypotheses you can use the linearHypothesis function from the package car. A minimal sketch, assuming a fit from lm(y ~ x):
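    library(car)
    fit <- lm(y ~ x)
    linearHypothesis(fit, "x = 1")   # F test of the restriction beta_1 = 1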

– mpiktas

  • Can this be used for a one-sided test? –  user66430 Commented Feb 7, 2017 at 15:17

It seems you're still trying to reject a null hypothesis. There are loads of problems with that, not the least of which is that you may not have enough power to see that you're different from 1. It sounds like you don't care that the slope is 0.07 different from 1, but what if you can't really tell? What if you're actually estimating a slope that varies wildly and may be quite far from 1, with something like a confidence interval of ±0.4? Your best tactic here is not changing the null hypothesis but speaking reasonably about an interval estimate. If you apply the command confint() to your model you can get a 95% confidence interval around your slope, and then use it to discuss the slope you did get. If 1 is within the confidence interval, you can state that it is within the range of values you believe likely to contain the true value. But more importantly, you can also state what that range of values is.
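For example, a minimal sketch, assuming a fit from lm(y ~ x):

    fit <- lm(y ~ x)
    confint(fit, "x", level = 0.95)   # 95% confidence interval for the slope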

– John

The point of testing is that you want to reject your null hypothesis, not confirm it. The fact that there is no significant difference is in no way proof of the absence of a difference. For that, you'll have to define what effect size you deem reasonable to reject the null.

Testing whether your slope is significantly different from 1 is not that difficult: you just test whether the difference $slope - 1$ differs significantly from zero. By hand, this would be something like the following sketch (assuming y and x are your data vectors):

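    fit     <- lm(y ~ x)
    b1      <- coef(fit)[2]
    seslope <- coef(summary(fit))[2, 2]     # standard error of the slope
    df      <- df.residual(fit)
    tstat   <- (b1 - 1) / seslope           # test H0: slope = 1
    pval    <- 2 * pt(-abs(tstat), df)
    pval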
Now you should be aware of the fact that the effect size at which a difference becomes significant is roughly qt(0.975, df) * seslope, provided that we have a decent estimator of the standard error on the slope. Hence, if you decide that a difference should only be detected as significant from 0.1 onward, you can calculate the necessary DF along these lines:

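    # a sketch: smallest df at which a difference of 0.1 would be significant,
    # taking seslope from the fit above as given
    df_needed <- which(qt(0.975, 1:1000) * seslope < 0.1)[1]
    df_needed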
Mind you, this is pretty dependent on the estimate of seslope. To get a better estimate of seslope, you could do a resampling of your data. A naive sketch would be:

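    # naive nonparametric bootstrap of the slope's standard error
    B <- 2000
    slopes <- replicate(B, {
      i <- sample(seq_along(y), replace = TRUE)
      coef(lm(y[i] ~ x[i]))[2]
    })
    seslope2 <- sd(slopes)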
Putting seslope2 in place of the original seslope in the calculation above returns the corresponding required degrees of freedom.

All this will tell you that your dataset will return a significant result faster than you deem necessary, and that you only need 7 degrees of freedom (in this case 9 observations) if you want to be sure that non-significant means what you want it to mean.

– Joris Meys

You simply cannot make probability or likelihood statements about the parameter using a confidence interval; that is a Bayesian paradigm.

What John is saying is confusing because there is an equivalence between CIs and p-values: at the 5% level, saying that your 95% CI includes 1 is equivalent to saying that the p-value is above 0.05.

linearHypothesis allows you to test restrictions different from the standard beta=0

– Stefano Nembrini



Regression-based joint orthogonality tests of balance can over-reject: so what should you do?

David McKenzie


One of the shortest posts I wrote for the blog was on a joint test of orthogonality when testing for balance between treatment and control groups. Given a set of k covariates X1, X2, X3, …, Xk, this involves running the regression:

Treatment = a + b1·X1 + b2·X2 + b3·X3 + … + bk·Xk + u

And then testing the joint hypothesis b1=b2=b3=…=bk=0. This could be done by running the equation as a linear regression and using an F-test, or running it as a probit and using a chi-squared test. If the experiment is stratified, you might want to do this conditioning on randomization strata, especially if the probability of assignment to treatment varies across strata; and if the experiment is clustered, then the standard errors should be clustered. There are questions about whether it is desirable at all to do such tests when you know for sure the experiment was correctly randomized, but let's assume you want to do such a test, perhaps to show the sample is still balanced after attrition, or that a randomization done in the field was done correctly.

One of the folk wisdoms is that researchers sometimes are surprised to find this test rejecting the null hypothesis of joint orthogonality, especially when they have a lot of variables in their balance table, or when they have multiple treatments and estimate a multinomial logit. A new paper by Jason Kerwin, Nada Rostom and Olivier Sterck shows this via simulations, and offers a solution.

Joint orthogonality tests based on standard robust standard errors over-reject the null, especially when k is large relative to n

Kerwin et al. look at both joint orthogonality tests, as well as the practice of doing pairwise t-tests (or group F-tests with multiple treatments) and doing some sort of “vote counting” where e.g. researchers look to see whether more than 10 percent of the tests reject the null at the 10% level. They run simulations for two data generating processes they specify (one using individual level randomization, and one clustered), and with data from two published experiments (one with k=33 and n=698 and individual level randomization, and one with k=10 and clustered randomization with 1016 units in 148 clusters).

They find that standard joint orthogonality tests with “robust” standard errors (HC1, HC2, or HC3) over-reject the null in their simulations:

  • When n = 500 and k = 50, in one data generating process the test rejects the null at the 10% level approximately 50% of the time! That is, in half the cases researchers would conclude that a truly randomized experiment resulted in imbalance between treatment and control.

  • Things look a lot better if n is large relative to k. With n = 5,000, size is around the correct 10% even for k = 50 or 60; when k = 10, size looks pretty good for n = 500 or more.

  • The issue is, not surprisingly, worse in clustered experiments, where the effective degrees of freedom are lower.

What is the problem?

The problem is that standard Eicker-White robust standard error asymptotics do not hold when the number of covariates is large relative to the sample size. Cattaneo et al. (2018) provide discussion and proofs, and suggest that the HC3 estimator can be conservative and used for inference, although Kerwin et al. still find over-rejection using HC3 in their simulations. In addition to the number of covariates, leverage matters a lot, and having a lot of covariates in a small sample can increase leverage.

So what are the solutions?

The solution Kerwin et al. propose is to use omnibus tests with randomization inference instead of regression standard errors. They show this gives the correct size in their simulations, works with clustering, and also works with multiple treatments. They show this makes a difference in practice for the published papers they revisit: in one, the F-test p-value from HC1 clustered standard errors is p=0.088, whereas it would be 0.278 using randomization inference; similarly, a regression clustered standard error p-value of 0.068 becomes 0.186 under randomization inference. Using randomization inference thus makes the published papers' claims of balanced randomization more credible (for once, a methods paper that strengthens existing results!).
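A bare-bones sketch of the randomization-inference version, again with hypothetical names and assuming simple (unstratified, non-clustered) assignment; with strata or clusters you would instead permute treatment within strata or at the cluster level:

    # compare the observed joint F statistic to its re-randomization distribution
    obs_F <- summary(lm(treatment ~ x1 + x2 + x3, data = d))$fstatistic["value"]
    ri_F  <- replicate(2000, {
      t_perm <- sample(d$treatment)   # re-draw the treatment labels
      summary(lm(t_perm ~ x1 + x2 + x3, data = d))$fstatistic["value"]
    })
    mean(ri_F >= obs_F)               # randomization-inference p-value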

My other suggestion is for researchers to also think carefully about how many variables they put in their balance tables in the first place. We are most concerned about imbalances in variables that will be highly correlated with outcomes of interest, but we also often like to use this balance table/Table 1 to provide summary statistics that help give context and details of the sample. The latter is a reason for more controls, but keeping to 10-20 controls rather than 30-50 seems plenty to me in most cases, and it will also help with journals' restrictions on how many rows your tables can have. Pre-registering which variables will go into this test then helps guard against selective reporting. There are also some parallels to the use of methods such as pdslasso to choose controls; I have a new working paper coming out soon on using this method with field experiments, and one of the lessons there is that putting in too many variables can result in a higher chance of not selecting the ones that matter.

Another practical note

Another practical note with these tests is that it is common to have a few missing values for some baseline covariates: e.g., age might be missing for 3 cases, gender for one, education for a few others, etc. This does not present such a problem for pairwise t-tests (where you are then testing that treatment and control are balanced for the subsample that has data on a particular variable). But for a joint orthogonality F-test, the regression would then be estimated only for the subsample with no missing data, which could be a lot smaller than n. Researchers then need to think about dummying out the missing values before running this test, but this can result in a whole lot more (often highly correlated) covariates in the form of dummy variables for the missing values. Another reason to be judicious about which variables go into the omnibus test and to focus on a subset of variables without many missing values.


David McKenzie

Lead Economist, Development Research Group, World Bank


Random forest analysis and lasso regression outperform traditional methods in identifying missing data auxiliary variables when the MAR mechanism is nonlinear (p.s. Stop using Little’s MCAR test)

  • Original Manuscript
  • Published: 09 September 2024


  • Timothy Hayes   ORCID: orcid.org/0000-0001-7530-0241 1 ,
  • Amanda N. Baraldi   ORCID: orcid.org/0000-0003-1978-4768 2 &
  • Stefany Coxe   ORCID: orcid.org/0000-0002-2203-0775 3  


The selection of auxiliary variables is an important first step in appropriately implementing missing data methods such as full information maximum likelihood (FIML) estimation or multiple imputation. However, practical guidelines and statistical tests for selecting useful auxiliary variables are somewhat lacking, leading to potentially biased estimates. We propose the use of random forest analysis and lasso regression as alternative methods to select auxiliary variables, particularly in situations in which the missing data pattern is nonlinear or otherwise complex (i.e., interactive relationships between variables and missingness). Monte Carlo simulations demonstrate the effectiveness of random forest analysis and lasso regression compared to traditional methods ( t -tests, Little’s MCAR test, logistic regressions), in terms of both selecting auxiliary variables and the performance of said auxiliary variables when incorporated in an analysis with missing data. Both techniques outperformed traditional methods, providing a promising direction for improvement of practical methods for handling missing data in statistical analyses.


Data availability.

Reiterating the open practices statement above, all simulation files and worked example code are available on an Open Science Framework Repository at:  https://osf.io/q84ts/ .

We note that although extensions of both FIML and multiple imputation have been developed to handle MNAR missing data, we refer throughout the paper to the more widely known and used MAR-based versions of these methods—e.g., invoking FIML estimation under missing data by setting arguments missing = “FIML” and fixed.x = “TRUE” in the lavaan package in R, as in the simulation reported later in the paper.

Note that the goal of satisfying the MAR assumption is aspirational but unverifiable in practice: in real datasets, researchers can never be certain that (a) they have identified true causes, as opposed to correlates, of missing data; (b) they have identified all such causes of missingness and all are measured and available in the dataset; and (c) missing values are not additionally caused by participants’ unseen scores on the variables in question, resulting in an analysis satisfying the MNAR mechanism. In other words, researchers can never be certain that the MAR assumption is (fully) met; rather, researchers can only render MAR more plausible by searching for and including useful auxiliary variables in analysis. In practice, researchers can never distinguish between MAR and MNAR mechanisms, as doing so would require access to participants’ unseen (missing) scores on all variables with missing data.

Our collective experience collaborating with and providing statistical consultation for numerous substantive and applied researchers has led us to the firm conviction that successful convergence of complex multiple imputation models is by no means a foregone conclusion, especially when models incorporate complexities such as those listed above. The definition of “successful convergence” for multiple imputation is crucial to this conclusion. While on the user end one may achieve successful results with no warning message in most software packages, investigation of recommended imputation diagnostics might demonstrate untrustworthy performance (see, e.g., Enders, 2022 ; Hayes & Enders, 2023 ).

Unless the researcher has decisive reasons to believe that the data are MCAR, such as when missing data are caused by a lab computer periodically crashing in a haphazard manner unrelated to participants’ characteristics or when the researcher has used a planned missing data design to purposefully inject MCAR missing data.

Alternatively, the researcher might include all substantive model variables as well, e.g.,

\(\text{logit}[\Pr(m=1)] = \gamma_{0} + \gamma_{1}{a}_{1} + \gamma_{2}{a}_{2} + \gamma_{3}{a}_{3} + \gamma_{4}x,\)

which would allow the researcher to assess whether candidate auxiliary variables \({a}_{1}\), \({a}_{2}\), and \({a}_{3}\) predicted missing data above and beyond the variable(s) in the substantive model (i.e., x, smoking attitudes, in the hypothetical example).
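In R, such a logistic-regression check might look like the following sketch (the data frame and variable names are hypothetical):

    # do candidate auxiliary variables predict missingness on y beyond x?
    d$m_y <- as.numeric(is.na(d$y))          # missing-data indicator for y
    fit <- glm(m_y ~ a1 + a2 + a3 + x, family = binomial, data = d)
    summary(fit)                             # Wald z tests for each candidate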

Admittedly, this poses no shortcoming when assessing the types of inherently parabolic convex missing mechanisms under specific consideration in the present study, but may hinder generalizations to other, thornier, less orthodox functional forms of the relationship between auxiliary variables and missing data indicators.

Note that this implies that the permutation importance test was conducted using marginal rather than partial variable importance, as described by Strobl et al. (2020). Based on pilot simulations, this procedure performed substantially better than partial variable importance measures. Because our goal here was not a detailed comparison of these options, however, we do not discuss partial importance measures further.

Note that we also ran a set of analyses that included no auxiliary variables and that estimated the model using listwise deletion rather than FIML, using argument missing = “listwise” in lavaan. Because the results of these listwise analyses were identical to those of the “no auxiliary variable” FIML analyses, we opted to conserve space by omitting them from our presentation here.

This can be said of the interactive mechanism here because it was designed to mimic the effects of a convex functional form, despite missing data rates depending on the values of two, rather than just one, auxiliary variables.

Arbuckle, J. N. (1996). Full information estimation in the presence of incomplete data. In Advanced structural equation modeling. (pp. 243–277). Lawrence Erlbaum Associates. Inc.

Berk, R. A. (2009). Statistical learning from a regression perspective . Springer.


Breiman, L. (1996). Bagging predictors. Machine Learning, 24 (2), 123–140. https://doi.org/10.1007/BF00058655


Breiman, L. (2001). Random Forests. Machine Learning, 45 (1), 5–32. https://doi.org/10.1023/A:1010933404324

Breiman, L., Friedman, J. H., Olshen, R. A., & Stone, C. J. (1984). Classification and Regression Trees . Wadsworth.

Cohen, J., Cohen, P., Aiken, L. S., & West, S. G. (2003). Applied multiple regression/correlation analysis for the behavioral sciences (3rd Ed.). Lawrence Erlbaum Associates, Inc.

Collins, L. M., Schafer, J. L., & Kam, C. M. (2001). A comparison of inclusive and restrictive strategies in modern missing data procedures. Psychological Methods, 6 (4), 330–351. https://doi.org/10.1037/1082-989X.6.4.330


Debeer, D., Hothorn, T., & Strobl, C. (2021). permimp: Conditional Permutation Importance (R package version 1.0–2). https://CRAN.R-project.org/package=permimp

Debeer, D., & Strobl, C. (2020). Conditional permutation importance revisited. BMC Bioinformatics, 21 (1), 307. https://doi.org/10.1186/s12859-020-03622-2


Dixon, W. J. (1988). BMDP statistical software . University of California Press.

Enders, C. K. (2021). Applied missing data analysis (2nd ed.). Manuscript in press at Guilford Press.

Enders, C. K. (2022). Applied missing data analysis (2nd Ed.). The Guilford Press.

Enders, C. K. (2023). Fitting structural equation models with missing data. In Handbook of structural equation modeling (2nd Ed., pp. 223–240). The Guilford Press.

Enders, C. K., Du, H., & Keller, B. T. (2020). A model-based imputation procedure for multilevel regression models with random coefficients, interaction effects, and nonlinear terms. Psychological Methods, 25 (1), 88–112. https://doi.org/10.1037/met0000228

Graham, J. W. (2003). Adding missing-data-relevant variables to FIML-based structural equation models. Structural Equation Modeling: A Multidisciplinary Journal, 10 (1), 80–100. https://doi.org/10.1207/S15328007SEM1001_4

Grund, S., Lüdtke, O., & Robitzsch, A. (2021). Multiple imputation of missing data in multilevel models with the R package mdmb: A flexible sequential modeling approach. Behavior Research Methods . https://doi.org/10.3758/s13428-020-01530-0

Hapfelmeier, A., Hothorn, T., Ulm, K., & Strobl, C. (2014). A new variable importance measure for random forests with missing data. Statistics and Computing, 24 (1), 21–34. https://doi.org/10.1007/s11222-012-9349-1

Hapfelmeier, A., & Ulm, K. (2013). A new variable selection approach using Random Forests. Computational Statistics & Data Analysis, 60 , 50–69. https://doi.org/10.1016/J.CSDA.2012.09.020

Hastie, T., Tibshirani, R., & Friedman, J. H. (2009). The elements of statistical learning . Springer-Verlag.


Hayes, T., & Enders, C. K. (2023). Maximum Likelihood and Multiple Imputation Missing Data Handling: How They Work, and How to Make Them Work in Practice. In H. Cooper, A. Panter, D. Rindskopf, K. , Sher, M. Coutanche, & L. McMullen (Eds.), APA Handbook of Research Methods in Psychology (2nd Ed.). American Psychological Association.

Hoerl, A. E., & Kennard, R. W. (1970). Ridge Regression: Biased Estimation for Nonorthogonal Problems. Technometrics, 12 (1), 55. https://doi.org/10.2307/1267351

Hothorn, T., Hornik, K., & Zeileis, A. (2006). Unbiased Recursive Partitioning: A Conditional Inference Framework. Journal of Computational and Graphical Statistics, 15 (3), 651–674. https://doi.org/10.1198/106186006X133933

IBM Corp. (2022). IBM SPSS Statistics for Macintosh, Version 29.0 . IBM Corp.

James, G., Witten, D., Hastie, T., & Tibshirani, R. (2021). An introduction to statistical learning with applications in R (2nd Ed.). Springer.

Jamshidian, M., & Jalal, S. (2010). Tests of Homoscedasticity, Normality, and Missing Completely at Random for Incomplete Multivariate Data. Psychometrika, 75 (4), 649–674. https://doi.org/10.1007/s11336-010-9175-3

Jamshidian, M., Jalal, S., & Jansen, C. (2014). MissMech : An R Package for Testing Homoscedasticity, Multivariate Normality, and Missing Completely at Random (MCAR). Journal of Statistical Software , 56 (6), 1–31. https://doi.org/10.18637/jss.v056.i06

Jeliĉić, H., Phelps, E., & Lerner, R. M. (2009). Use of missing data methods in longitudinal studies: The persistence of bad practices in developmental psychology. Developmental Psychology, 45 (4), 1195–1199. https://doi.org/10.1037/a0015665

Kim, K. H., & Bentler, P. M. (2002). Tests of homogeneity of means and covariance matrices for multivariate incomplete data. Psychometrika, 67 (4), 609–624. https://doi.org/10.1007/BF02295134

Kursa, M. B., & Rudnicki, W. R. (2010). Feature Selection with the Boruta Package. Journal of Statistical Software , 36 (11), 1–13. https://doi.org/10.18637/jss.v036.i11

Liaw, A., & Wiener, M. (2002). Classification and regression by randomForest. R News, 2 (3), 12–22.

Little, R. J. A. (1988). A test of missing completely at random for multivariate data with missing values. Journal of the American Statistical Association, 83 (404), 1198–1202. https://doi.org/10.1080/01621459.1988.10478722

Little, R. J. A., & Rubin, D. B. (2002). Statistical analysis with missing data (2nd ed.). Wiley. https://doi.org/10.1002/9781119013563

Muthén, B., Kaplan, D., & Hollis, M. (1987). On structural equation modeling with data that are not missing completely at random. Psychometrika, 52 (3), 431–462. https://doi.org/10.1007/BF02294365

Nicholson, J. S., Deboeck, P. R., & Howard, W. (2017). Attrition in developmental psychology: A review of modern missing data reporting and practices. International Journal of Behavioral Development, 41 (1), 143–153. https://doi.org/10.1177/0165025415618275

Park, T., & Lee, S.-Y. (1997). A test of missing completely at random for longitudinal data with missing observations. Statistics in Medicine, 16 (16), 1859–1871. https://doi.org/10.1002/(SICI)1097-0258(19970830)16:16%3c1859::AID-SIM593%3e3.0.CO;2-3

R Core Team. (2022). R: A language and environment for statistical computing . R Foundation for Statistical Computing. http://r-project.org/

Raghunathan, T. E. (2004). What Do We Do with Missing Data? Some Options for Analysis of Incomplete Data. Annual Review of Public Health, 25 (1), 99–117. https://doi.org/10.1146/annurev.publhealth.25.102802.124410

Raykov, T., & Marcoulides, G. A. (2014). Identifying Useful Auxiliary Variables for Incomplete Data Analyses. Educational and Psychological Measurement, 74 (3), 537–550. https://doi.org/10.1177/0013164413511326

Raykov, T., & West, B. T. (2016). On enhancing plausibility of the missing at random assumption in incomplete data analyses via evaluation of response-auxiliary variable correlations. Structural Equation Modeling, 23 (1), 45–53. https://doi.org/10.1080/10705511.2014.937848

Rosseel, Y. (2012). lavaan : An R package for structural equation modeling. Journal of Statistical Software , 48 (2), 1–36. https://doi.org/10.18637/jss.v048.i02

Rothacher, Y., & Strobl, C. (2023a). Identifying Informative Predictor Variables with Random Forests. Journal of Educational and Behavioral Statistics, Advance Online Publication. https://doi.org/10.3102/10769986231193327

Rothacher, Y., & Strobl, C. (2023b). Identifying Informative Predictor Variables With Random Forests. Journal of Educational and Behavioral Statistics . https://doi.org/10.3102/10769986231193327

Rubin, D. B. (1976). Inference and missing data. Biometrika, 63 (3), 581–592. https://doi.org/10.2307/2335739

Rubin, D. B. (1987). Multiple imputation for nonresponse in surveys . Wiley.

Savalei, V., & Bentler, P. M. (2009). A two-stage approach to missing data: Theory and application to auxiliary variables. Structural Equation Modeling, 16 (3), 477–497. https://doi.org/10.1080/10705510903008238

Schafer, J. L. (1997). Analysis of incomplete multivariate data . Chapman & Hall.

Strobl, C., Boulesteix, A.-L., Kneib, T., Augustin, T., & Zeileis, A. (2008). Conditional variable importance for random forests. BMC Bioinformatics, 9 (1), 307. https://doi.org/10.1186/1471-2105-9-307

Strobl, C., Boulesteix, A.-L., Zeileis, A., & Hothorn, T. (2007). Bias in random forest variable importance measures: Illustrations, sources and a solution. BMC Bioinformatics, 8 (1), 25. https://doi.org/10.1186/1471-2105-8-25

Strobl, C., Malley, J., & Tutz, G. (2009). An introduction to recursive partitioning: Rationale, application, and characteristics of classification and regression trees, bagging, and random forests. Psychological Methods, 14 (4), 323–348. https://doi.org/10.1037/a0016973

Tay, J. K., Narasimhan, B., & Hastie, T. (2023). Elastic Net Regularization Paths for All Generalized Linear Models. Journal of Statistical Software , 106 (1), 1–31. https://doi.org/10.18637/jss.v106.i01

Thoemmes, F., & Rose, N. (2014). A Cautious Note on Auxiliary Variables That Can Increase Bias in Missing Data Problems. Multivariate Behavioral Research, 49 (5), 443–459. https://doi.org/10.1080/00273171.2014.931799

Tibshirani, R. (1996). Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society, Series B (Methodological), 58(1), 267–288. https://www.jstor.org/stable/2346178

van Ginkel, J. R., Linting, M., Rippe, R. C. A., & van der Voort, A. (2020). Rebutting existing misconceptions about multiple imputation as a method for handling missing data. Journal of Personality Assessment, 102 (3), 297–308. https://doi.org/10.1080/00223891.2018.1530680

Woods, A. D., Gerasimova, D., Dusen, B. Van, Nissen, J., Bainter, S., Uzdavines, A., Davis-Kean, P., Halvorson, M. A., King, K., Logan, J., Xu, M., Vasilev, M. R., Clay, J. M., Moreau, D., Joyal-Desmarais, K., Cruz, R. A., Brown, D., Schmidt, K., & Elsherif, M. (2023). Best Practices for Addressing Missing Data through Multiple Imputation . PsyArXiv. https://doi.org/10.31234/OSF.IO/UAEZH

Yuan, K.-H., Jamshidian, M., & Kano, Y. (2018). Missing Data Mechanisms and Homogeneity of Means and Variances-Covariances. Psychometrika, 83 (2), 425–442. https://doi.org/10.1007/s11336-018-9609-x

Zhang, Q., & Wang, L. (2017). Moderation analysis with missing data in the predictors. Psychological Methods, 22 (4), 649–666. https://doi.org/10.1037/met0000104

Funding

No funding was used to support this research.

Author information

Authors and Affiliations

Department of Psychology, Florida International University, 11200 SW 8 Street, Miami, FL, DM 381B, USA

Timothy Hayes

Department of Psychology, Oklahoma State University, Stillwater, OK, USA

Amanda N. Baraldi

Department of Computational Biomedicine, Cedars-Sinai Medical Center, Los Angeles, CA, USA

Stefany Coxe

Corresponding author

Correspondence to Timothy Hayes.

Ethics declarations

Conflicts of interest

The authors have no conflicts of interest to disclose. 

Ethics approval, Informed consent, and Consent for publication

Not applicable to the simulated data used in the paper (no human subjects participated in this theoretical simulation research).

Open practices statement

All simulation files and worked example code are available in an Open Science Framework repository at https://osf.io/q84ts/.

Supplementary information

Below are the links to the electronic supplementary material.

Supplementary file 1 (DOCX 105 KB)

Supplementary file 2 (PPTX 136 KB)

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Hayes, T., Baraldi, A.N. & Coxe, S. Random forest analysis and lasso regression outperform traditional methods in identifying missing data auxiliary variables when the MAR mechanism is nonlinear (p.s. Stop using Little’s MCAR test). Behav Res (2024). https://doi.org/10.3758/s13428-024-02494-1

Accepted: 19 June 2024

Published: 09 September 2024

DOI: https://doi.org/10.3758/s13428-024-02494-1

Keywords

  • Missing data
  • Auxiliary variables
  • Random forest
  • Missing at random

COMMENTS

  1. 12.2.1: Hypothesis Test for Linear Regression

    The null hypothesis of a two-tailed test states that there is not a linear relationship between x and y. The alternative hypothesis of a two-tailed test states that there is a significant linear relationship between x and y. Either a t-test or an F-test may be used to see if the slope is significantly different from zero.
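To make that last point concrete, here is a minimal R sketch (simulated data; the names hours and score are hypothetical) showing that the slope t-test and the overall F-test address the same null hypothesis, with t² = F:

```r
# Simple linear regression: slope t-test and overall F-test agree.
set.seed(1)
hours <- runif(20, 0, 10)                    # hypothetical predictor
score <- 67 + 5 * hours + rnorm(20, sd = 5)  # hypothetical response
fit   <- lm(score ~ hours)

summary(fit)$coefficients["hours", ]  # estimate, SE, t value, p-value
anova(fit)                            # overall F-test of H0: beta1 = 0

t_val <- summary(fit)$coefficients["hours", "t value"]
f_val <- anova(fit)["hours", "F value"]
all.equal(t_val^2, f_val)             # TRUE: t squared equals F
```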

  2. PDF Chapter 9 Simple Linear Regression

    For simple linear regression, the chief null hypothesis is H0: β1 = 0, and the corresponding alternative hypothesis is H1: β1 ≠ 0. If this null hypothesis is true, then, from E(Y) = β0 + β1x, we can see that the population mean of Y is β0 for every x value ...

  3. 6.4

    For the multiple linear regression model, there are three different hypothesis tests for slopes that one could conduct: a test that all of the slope parameters are 0, a test that a subset (more than one, but not all) of the slope parameters are 0, and a test that a single slope parameter is 0. The subset test is sketched below.
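A minimal R sketch of that subset test (simulated data; the names x1, x2, x3, and y are hypothetical), comparing nested models with a partial F-test:

```r
# Partial F-test of H0: beta2 = beta3 = 0 via nested models.
set.seed(2)
n  <- 100
x1 <- rnorm(n); x2 <- rnorm(n); x3 <- rnorm(n)
y  <- 2 + 1.5 * x1 + rnorm(n)   # only x1 truly predicts y here

full    <- lm(y ~ x1 + x2 + x3)
reduced <- lm(y ~ x1)           # the model implied by the null hypothesis

anova(reduced, full)            # large p-value => fail to reject H0
```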

  4. 15.5: Hypothesis Tests for Regression Models

    Formally, our "null model" corresponds to the fairly trivial "regression" model in which we include 0 predictors and only include the intercept term b0: H0: Yi = b0 + εi. If our regression model has K predictors, the "alternative model" is described using the usual formula for a multiple regression model: H1: Yi = b0 + b1Xi1 + … + bKXiK + εi.

  5. What is a null model in regression and how does it relate to the null hypothesis

    In regression, as described partially in the other two answers, the null model is the null hypothesis that all the regression parameters are 0. So you can interpret this as saying that under the null hypothesis, there is no trend and the best estimate/predictor of a new observation is the mean, which is 0 in the case of no intercept.
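A minimal R illustration of these two excerpts (simulated data): the intercept-only null model estimates just the mean of y, and the overall F-test compares the fitted regression against it.

```r
# The "null model" contains only the intercept, so its single
# coefficient equals the sample mean of the response.
set.seed(3)
y <- rnorm(50, mean = 10)
x <- rnorm(50)

null_fit <- lm(y ~ 1)      # intercept-only null model
coef(null_fit)             # same value as mean(y)
mean(y)

full_fit <- lm(y ~ x)
anova(null_fit, full_fit)  # F-test of the regression against the null model
```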

  6. Linear regression

    The null hypothesis is rejected if the test statistic falls outside the acceptance region. How the acceptance region is determined depends not only on the desired size of the test, but also on whether the test is two-tailed (the parameter could be smaller or larger than the hypothesized value; we do not exclude either of the two possibilities) or one-tailed (only one of the two directions, i.e., either smaller or larger, is possible).
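For example, the critical t values bounding the acceptance region for a slope test with 18 residual degrees of freedom (as in a simple regression on 20 observations; the alpha level here is illustrative) can be computed in R:

```r
# Critical t values at alpha = .05 with df = n - 2 = 18.
alpha <- 0.05
df    <- 18

qt(1 - alpha / 2, df)  # two-tailed: reject H0 if |t| > 2.10
qt(1 - alpha, df)      # one-tailed (upper): reject H0 if t > 1.73
```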

  7. 3.3.4: Hypothesis Test for Simple Linear Regression

    Simple linear regression ANOVA hypothesis test example: rainfall and sales of sunglasses. We will now describe a hypothesis test to determine if the regression model is meaningful; in other words, does the value of X in any way help predict the expected value of Y?

  8. Simple linear regression

    Interpreting the hypothesis test: If we reject the null hypothesis, can we assume there is an exact linear relationship? No. A quadratic relationship may be a better fit, for example. This test assumes the simple linear regression model is correct, which precludes a quadratic relationship. If we don't reject the null hypothesis, ...
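A short R sketch of this caveat (simulated data with a genuinely curved relationship): the linear slope test rejects its null, yet a quadratic term still improves the fit significantly.

```r
# Rejecting H0: beta1 = 0 does not establish that the relationship
# is exactly linear; compare the linear fit against a quadratic one.
set.seed(4)
x <- runif(100, -2, 2)
y <- 1 + x + 0.8 * x^2 + rnorm(100, sd = 0.5)  # true curve is quadratic

linear    <- lm(y ~ x)
quadratic <- lm(y ~ x + I(x^2))
anova(linear, quadratic)  # significant F => the quadratic term is needed
```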

  9. 5.7

    The reduced model is the model that the null hypothesis describes. Because the null hypothesis sets each of the slope parameters in the full model equal to 0, the reduced model is: ... For the multiple linear regression model, there are three different hypothesis tests for slopes that one could conduct. They are: Hypothesis test for testing ...

  10. Linear regression hypothesis testing: Concepts, Examples

    Here are the key steps in hypothesis testing with linear regression models. Formulate the null and alternative hypotheses: the first step of hypothesis testing is to formulate the null and alternative hypotheses. The null hypothesis (H0) is a statement about the state of the real world whose truth the data are used to evaluate.

  11. Understanding the t-Test in Linear Regression

    Whenever we perform linear regression, we want to know if there is a statistically significant relationship between the predictor variable and the response variable. We test for significance by performing a t-test for the regression slope. We use the following null and alternative hypothesis for this t-test: H 0: β 1 = 0 (the slope is equal to ...

  12. Null Hypothesis: Definition, Rejecting & Examples

    When your sample contains sufficient evidence, you can reject the null and conclude that the effect is statistically significant. Statisticians denote the null hypothesis as H0 and the alternative hypothesis as HA. Null hypothesis H0: no effect exists in the population. Alternative hypothesis HA: the effect exists in the population. In every study or experiment, researchers assess an effect or relationship.

  13. Null & Alternative Hypotheses

    The null hypothesis (H0) answers "No, there's no effect in the population." The alternative hypothesis (Ha) answers "Yes, there is an effect in the population." The null and alternative are always claims about the population. That's because the goal of hypothesis testing is to make inferences about a population based on a sample.

  14. PDF Lecture 5 Hypothesis Testing in Multiple Linear Regression

    As in simple linear regression, under the null hypothesis t0 = β̂j / se(β̂j) ∼ t(n−p−1). We reject H0 if |t0| > t(n−p−1, 1−α/2). This is a partial test because β̂j depends on all of the other predictors xi, i ≠ j, that are in the model. Thus, this is a test of the contribution of xj given the other predictors in the model.
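That partial t-statistic can be reproduced by hand in R (simulated data; the variable names are hypothetical):

```r
# t0 = beta_hat_j / se(beta_hat_j), referred to t with n - p - 1 df.
set.seed(5)
n  <- 40
x1 <- rnorm(n); x2 <- rnorm(n)
y  <- 1 + 0.5 * x1 + 0.3 * x2 + rnorm(n)
fit <- lm(y ~ x1 + x2)

b  <- unname(coef(fit)["x1"])              # beta_hat for x1
se <- unname(sqrt(diag(vcov(fit)))["x1"])  # its standard error
t0 <- b / se
p  <- 2 * pt(abs(t0), df = df.residual(fit), lower.tail = FALSE)

c(t0 = t0, p = p)                  # matches the x1 row of:
summary(fit)$coefficients["x1", ]
```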

  15. Null hypothesis for linear regression

    I am confused about the null hypothesis for linear regression. If a variable in a linear model has p < 0.05 (when R prints out stars), I would say the variable is a statistically significant part of the model.

  16. Linear Regression With R

    Null and alternative hypothesis: when there is a p-value, there is a null and an alternative hypothesis associated with it. In linear regression, the null hypothesis is that the coefficients associated with the variables are equal to zero.
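A minimal R sketch (simulated data) of the coefficient table behind those stars; each row's p-value tests the null hypothesis that the corresponding coefficient is zero.

```r
# Each row of the coefficient table tests H0: coefficient = 0.
set.seed(6)
x <- rnorm(30)
y <- 2 + 0.7 * x + rnorm(30)
fit <- lm(y ~ x)

summary(fit)                             # prints stars next to small p-values
summary(fit)$coefficients[, "Pr(>|t|)"]  # the p-values behind the stars
```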

  17. 5.6

    The "reduced model," which is sometimes also referred to as the "restricted model," is the model described by the null hypothesis H0. For simple linear regression, a common null hypothesis is H0: β1 = 0. In this case, the reduced model is obtained by "zeroing out" the slope β1 that appears in the full model. That is, the reduced model is: ...

  18. How to Check Linear Regression Assumptions (and What to Do If They Fail)

    Linear regression is a cornerstone of statistical modeling, widely employed in various fields, from economics and finance to social sciences and engineering. ... The null hypothesis is homoscedasticity. If the p-value is significant (typically less than 0.05), it suggests heteroscedasticity.
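One common version of this test is the Breusch–Pagan test; a minimal R sketch using bptest() from the lmtest package, with simulated heteroscedastic data:

```r
# Breusch-Pagan test of H0: homoscedasticity (constant error variance).
library(lmtest)

set.seed(7)
x <- runif(100, 1, 10)
y <- 2 + 3 * x + rnorm(100, sd = x)  # error spread grows with x
fit <- lm(y ~ x)

bptest(fit)  # small p-value => reject H0; evidence of heteroscedasticity
```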

  19. Hypothesis Test for Regression Slope

    This lesson describes how to conduct a hypothesis test to determine whether there is a significant linear relationship between an independent variable X and a dependent variable Y. The test focuses on the slope of the regression line Y = β0 + β1X, where β0 is a constant, β1 is the slope (also called the regression coefficient), and X is the value of ...

  20. Null hypothesis for ANOVA for regression

    For simple linear regression, the null hypothesis for the ANOVA is that the regression model (fit line) is identical to a simpler model (horizontal line). In other words, the null hypothesis is that the slope is actually zero. Also note that the term "linearity" is not really defined in the question or the answers, and can be misleading.

  21. Significance test for linear regression: how to test without

    They provided a simple explanation of the problem in making an inference from a p-value: for example, if the p-value is less than 0.05, we have enough evidence to reject the null hypothesis and accept the claim. By this conviction, in the regression framework we must reject the null hypothesis (H0: β = 0).

  22. ANOVA, Hypothesis Testing, Descriptive Statistics, Regression

    A regression analysis between sales (y, in $1000s) and advertising (x, in dollars) resulted in the following equation: ŷ = 30,000 + 4x. The above equation implies that an increase of $1 in advertising is associated with an increase of $4,000 in sales. A standard normal distribution is one that has zero mean and variance equal to 1. Regression analysis is a statistical procedure for developing a mathematical ...
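A two-line R check of that arithmetic (the slope of 4 is in units of $1000s of sales per dollar of advertising):

```r
# y-hat = 30,000 + 4x, with y measured in $1000s.
y_hat <- function(x) 30000 + 4 * x
(y_hat(501) - y_hat(500)) * 1000  # 4000 dollars of sales per extra $1 of advertising
```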

  23. Changing null hypothesis in linear regression

    Your hypothesis can be expressed as Rβ = r, where β is the vector of regression coefficients and R is a restriction matrix, with r the restrictions. If our model is y = β0 + β1x + u, then for the hypothesis β1 = 1, R = [0, 1] and r = 1.
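A minimal R sketch of testing such a restriction (simulated data); one convenient implementation is linearHypothesis() from the car package:

```r
# Test the non-default null H0: beta1 = 1, i.e. R beta = r with
# R = [0, 1] and r = 1.
library(car)

set.seed(8)
x <- rnorm(100)
y <- 2 + 1 * x + rnorm(100)  # true slope is 1
fit <- lm(y ~ x)

linearHypothesis(fit, "x = 1")                               # character form
linearHypothesis(fit, hypothesis.matrix = c(0, 1), rhs = 1)  # explicit R and r
```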

  24. Regression-based joint orthogonality tests of balance can over-reject

    A standard way for testing for balance between treatment and control groups is to regress a treatment indicator on a set of covariates, and then use an F-test to test the null hypothesis of joint orthogonality. However, a new paper shows that this test can over-reject the null substantially when sample sizes are small or the number of covariates large. Randomization inference approaches can be ...
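A minimal R sketch of the standard version of this balance test (simulated, randomized data; the covariate names are hypothetical):

```r
# Regress the treatment indicator on covariates; the overall F-test is
# the joint orthogonality (balance) test of H0: all covariate slopes = 0.
set.seed(9)
n     <- 200
treat <- rbinom(n, 1, 0.5)  # randomized treatment indicator
age   <- rnorm(n, 40, 10)
inc   <- rnorm(n, 50, 15)
edu   <- rnorm(n, 12, 3)

bal <- lm(treat ~ age + inc + edu)
f   <- summary(bal)$fstatistic            # value, numerator df, denominator df
pf(f[1], f[2], f[3], lower.tail = FALSE)  # joint p-value for balance
```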

  25. Random forest analysis and lasso regression outperform traditional methods in identifying missing data auxiliary variables when the MAR mechanism is nonlinear

    Under a linear missing data mechanism (second column in Table 8) known in the literature to affect variable means (and mean differences), Little's test performed as expected, correctly rejecting the null hypothesis of MCAR missing data with greater statistical power as both the missing data rate and sample size increased.
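For reference, a minimal R sketch of Little's MCAR test, the test this article cautions against over-relying on (one implementation is mcar_test() in the naniar package; the data here are simulated):

```r
# Little's test of H0: the data are missing completely at random (MCAR).
library(naniar)

set.seed(10)
dat <- data.frame(x = rnorm(100), y = rnorm(100))
dat$y[sample(100, 20)] <- NA  # impose missingness on y

mcar_test(dat)  # small p-value => reject H0 that missingness is MCAR
```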