A deep dive into glmnet: offset

offset

According to the official R documentation, offset should be

A vector of length nobs that is included in the linear predictor (a nobs x nc matrix for the “multinomial” family).

Its default value is NULL: in that case, glmnet internally sets the offset to be a vector of zeros having the same length as the response y.

Here is some example code for using the offset option:
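What follows is a minimal sketch of such a call. The data are simulated purely for illustration; the names x, y, minutes and fit1 are placeholders (fit1 matches the name used below).

```r
library(glmnet)

# Simulated data (illustrative only): counts observed over varying exposure
# times, with the log of the exposure supplied as the offset.
set.seed(1)
n <- 100; p <- 5
x <- matrix(rnorm(n * p), nrow = n)                 # covariates
minutes <- runif(n, 10, 40)                         # exposure time per observation
y <- rpois(n, lambda = minutes * exp(x[, 1] / 2))   # counts over each exposure

fit1 <- glmnet(x, y, family = "poisson", offset = log(minutes))
```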

If we specify offset in the glmnet call, then when making predictions with the model, we must specify the newoffset option. For example, if we want the predictions fit1 gives us for the training data, not specifying newoffset will give us an error:
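Continuing the sketch above:

```r
# Errors: fit1 was fit with an offset, but no newoffset is supplied here
predict(fit1, newx = x)
```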

This is the correct code:
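(Again continuing the simulated sketch above.)

```r
# Supply the same offset used at training time via newoffset
predict(fit1, newx = x, newoffset = log(minutes))
```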

So, what does offset actually do (or mean)? Recall that glmnet is fitting a linear model. More concretely, our data is $(x_1, y_1), \dots, (x_n, y_n)$, where the $x_i \in \mathbb{R}^p$ are our features for observation $i$ and $y_i$ is the response for observation $i$. For each observation, we are trying to model some variable $z_i$ as a linear combination of the features, i.e. $z_i = \beta_0 + \beta^T x_i$. $z_i$ is a function of $y_i$; the function depends on the context. For example,

For ordinary regression, $z_i = y_i$, i.e. the response itself. For logistic regression, $z_i = \log \dfrac{P(y_i = 1)}{1 - P(y_i = 1)}$, the log-odds. For Poisson regression, $z_i = \log(\mathbb{E}[y_i])$.

So, we are trying to find $\beta_0$ and $\beta$ so that $\beta_0 + \beta^T x_i$ is a good estimate for $z_i$. If we have an offset $o_i$ for each observation, then we are trying to find $\beta_0$ and $\beta$ so that $o_i + \beta_0 + \beta^T x_i$ is a good estimate for $z_i$.
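As a concrete check of this (continuing the simulated fit1 from the sketch above; s = 0.1 is an arbitrary choice of the penalty parameter), the link-scale predictions from glmnet should equal the offset plus the fitted linear predictor:

```r
# Link-scale prediction = offset + intercept + x %*% beta (sketch check)
b      <- as.numeric(coef(fit1, s = 0.1))     # intercept first, then coefficients
manual <- log(minutes) + b[1] + drop(x %*% b[-1])
link   <- predict(fit1, newx = x, newoffset = log(minutes),
                  s = 0.1, type = "link")
all.equal(as.numeric(link), manual)           # should be TRUE
```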

Why might we want to use offsets? There are two primary reasons for them stated in the documentation:

Useful for the “poisson” family (e.g. log of exposure time), or for refining a model by starting at a current fit.

Let me elaborate. First, offsets are useful for Poisson regression. The official vignette has a little section explaining this; let me explain it through an example.

Imagine that we are trying to predict how many points an NBA basketball player will score per minute based on his physical attributes. If the player’s physical attributes (i.e. the covariates of our model) are denoted by $x$ and the number of points he scores in a minute is denoted by $y$, then Poisson regression assumes that

$y \sim \text{Poisson}(\mu), \quad \text{where } \log \mu = \beta_0 + \beta^T x.$

$\beta_0$ and $\beta$ are parameters of the model to be determined.

Having described the model, let’s turn to our data. For each player $i$, we have physical covariates $x_i$. However, instead of having each player’s points per minute, we have the number of points scored over a certain time period. For example, we might have “player 1 scored 12 points over 30 minutes” instead of “player 1 scored 0.4 points per minute”.

Offsets allow us to use our data as is. In our example above, loosely speaking 12/30 (points per minute) is our estimate for $e^{\beta_0 + \beta^T x_1}$. Hence, 12 (points in 30 minutes) is our estimate for $30 e^{\beta_0 + \beta^T x_1} = e^{\log 30 + \beta_0 + \beta^T x_1}$. In our model, $\beta_0 + \beta^T x_1$ is our estimate for $\log(\text{points per minute})$, and so our estimate for $\log(\text{points in 30 minutes})$ would be $\log 30 + \beta_0 + \beta^T x_1$. The term $\log 30$ is the “offset” to get the model prediction for our data as is.

Taking this to the full dataset: if player $i$ scores $y_i$ points in $m_i$ minutes, then our offset would be the vector $(\log m_1, \dots, \log m_n)$, and the response we would feed glmnet is $(y_1, \dots, y_n)$.
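In code, this is just the offset argument from the earlier sketch. Reusing the simulated fit1 and treating minutes as the $m_i$ (the new players below are made up), predictions at different exposure times only require changing newoffset:

```r
# Hypothetical new players and their playing times
x_new       <- matrix(rnorm(3 * p), nrow = 3)
new_minutes <- c(10, 20, 30)

# Expected points per minute: an offset of log(1) = 0 gives the per-minute rate
rate_hat  <- predict(fit1, newx = x_new, newoffset = rep(0, 3),
                     s = 0.1, type = "response")

# Expected total points over each player's minutes
total_hat <- predict(fit1, newx = x_new, newoffset = log(new_minutes),
                     s = 0.1, type = "response")
```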

The second reason one might want to use offsets is to improve on an existing model. Continuing the example above: say we have a friend who has trained a model (not necessarily a linear model) to predict $y$, but he did not use the player’s physical attributes. We think that we can improve on his predictions by adding physical attributes to the model. One refinement to our friend’s model could be

$\log(\mathbb{E}[y]) = \log \hat{y}_{\text{friend}} + \beta_0 + \beta^T x,$

where $\hat{y}_{\text{friend}}$ is the prediction of $y$ from our friend’s model. In this setting, the offsets are simply our friend’s predictions, taken on the log scale. For model training, we would provide the log of the first model’s predictions on the training observations as the offset. To get predictions from the refinement on new observations, we would first compute the predictions from the first model, then supply their logs via the newoffset option in the predict call.
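As a sketch of how this could look in code (everything below is illustrative: the data are simulated, and an intercept-only Poisson GLM stands in for the friend's model):

```r
library(glmnet)

set.seed(2)
n <- 100; p <- 5
x <- matrix(rnorm(n * p), nrow = n)
y <- rpois(n, lambda = exp(1 + x[, 1] / 2))

# Stand-in for the friend's model: it ignores the physical attributes entirely
friend_fit  <- glm(y ~ 1, family = poisson)
friend_pred <- predict(friend_fit, type = "response")      # predicted counts

# Refinement: the log of the friend's predictions enters as the offset
fit2 <- glmnet(x, y, family = "poisson", offset = log(friend_pred))

# New observations: first compute the friend's predictions, then pass their
# log through newoffset when predicting from the refined model
x_new           <- matrix(rnorm(3 * p), nrow = 3)
friend_pred_new <- predict(friend_fit, newdata = data.frame(id = 1:3),
                           type = "response")
predict(fit2, newx = x_new, newoffset = log(friend_pred_new),
        s = 0.1, type = "response")
```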
