Reusable modeling pipelines are a practical idea that gets re-developed many times in many contexts. wrapr
supplies a particularly powerful pipeline notation, and a pipe-stage re-use system (notes here). We will demonstrate this with the vtreat
data preparation system.
Our example task is to fit a model on some arbitrary data. Our model will try to predict y
as a function of the other columns.
Our example data is 10,000 rows of 210 variables. Ten of the variables are related to the outcome to predict (y
), and 200 of them are irrelevant pure noise. Since this is a synthetic example we know which is which (and deliberately encode this information in the column names).
The data looks like the following:
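A rough sketch of how data of this shape might be generated and inspected follows (the column names, noise model, missing-value rate, and seed are assumptions, not the original code):

set.seed(2018)                             # arbitrary seed
n_row <- 10000
d <- data.frame(y = rnorm(n_row))
for(i in 1:10) {                           # ten variables related to y
  vi <- d$y + rnorm(n_row)
  vi[runif(n_row) < 0.1] <- NA             # inject some missing values
  d[[sprintf("var_%03d", i)]] <- vi
}
for(i in 1:200) {                          # two hundred pure-noise variables
  ni <- rnorm(n_row)
  ni[runif(n_row) < 0.1] <- NA
  d[[sprintf("noise_%03d", i)]] <- ni
}
str(d[, 1:4])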
Let’s start our example analysis.
We load our packages.
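Based on the functions used later in this note, the loaded packages are likely something along these lines (the exact set is an assumption):

library("wrapr")
library("vtreat")
library("glmnet")
library("ggplot2")
library("WVPlots")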
We will also need a couple of simple functions that are part of the upcoming vtreat 1.3.3.
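One of these helpers is a centering/scaling function used later in the pipeline; a minimal sketch with the signature used below (the actual vtreat 1.3.3 implementation may differ):

# scale and center the columns named in center/scale, leaving other columns alone
center_scale <- function(d, center, scale) {
  for(col in names(center)) {
    d[[col]] <- (d[[col]] - center[[col]]) / scale[[col]]
  }
  d
}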
We set up a parallel cluster to speed up some calculations.
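For example (the object name cl is an assumption carried through the later sketches):

ncore <- parallel::detectCores()
cl <- parallel::makeCluster(ncore)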
We split our data into training and a test evaluation set.
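A simple random split along the following lines (the 90/10 fraction is an assumption):

is_train <- runif(nrow(d)) <= 0.9
dTrain <- d[is_train, , drop = FALSE]
dTest <- d[!is_train, , drop = FALSE]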
Suppose our analysis plan is the following:

- Fix missing values with vtreat.
- Scale and center the original variables (but not the new indicator variables).
- Model y as a function of the other columns using glmnet.
Now both vtreat
and glmnet
can scale, but we are going to keep the scaling as a separate step to control which variables are scaled, and to show how composite data preparation pipelines work.
We fit a model with cross-validated data treatment and hyper-parameters as follows. The process described is intentionally long and involved, simulating a number of steps (possibly some requiring domain knowledge) taken by a data scientist to build a good model.
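The original fitting code is long; the following is only a hedged sketch of its shape, re-using the names that appear below (the significance screen, the alpha grid, and the use of cv.glmnet are assumptions, not the original steps):

outcome_name <- "y"
vars <- setdiff(colnames(dTrain), outcome_name)

# cross-validated data treatment with vtreat
cp <- vtreat::mkCrossFrameNExperiment(
  dTrain, vars, outcome_name,
  parallelCluster = cl)

# keep only variables that pass vtreat's significance screen
score_frame <- cp$treatments$scoreFrame
newvars <- score_frame$varName[score_frame$sig < 1 / nrow(score_frame)]

# center and scale the original variables, but not the indicator variables
scale_vars <- newvars[!grepl("_isBAD$", newvars)]
tfs <- cp$crossFrame[, newvars, drop = FALSE]
centering <- colMeans(tfs[, scale_vars, drop = FALSE])
scaling <- vapply(tfs[, scale_vars, drop = FALSE], sd, numeric(1))
tfs <- center_scale(tfs, center = centering, scale = scaling)

# cross-validate the elastic-net mixing parameter alpha and the lambda path
alphas <- seq(0, 1, by = 0.2)
cross_scores <- list(lambdas = list(), cvm = list())
for(i in seq_along(alphas)) {
  cv_i <- glmnet::cv.glmnet(
    as.matrix(tfs), cp$crossFrame[[outcome_name]],
    alpha = alphas[[i]], family = "gaussian", standardize = FALSE)
  cross_scores$lambdas[[i]] <- cv_i$lambda
  cross_scores$cvm[[i]] <- cv_i$cvm
}
best_i <- which.min(vapply(cross_scores$cvm, min, numeric(1)))
alpha <- alphas[[best_i]]
lambdas <- cross_scores$lambdas[[best_i]]
s <- lambdas[[which.min(cross_scores$cvm[[best_i]])]]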
pf <- data.frame(s = cross_scores$lambdas[[best_i]],
                 cvm = cross_scores$cvm[[best_i]])

ggplot(data = pf, aes(x = s, y = cvm)) +
  geom_point() +
  geom_line() +
  scale_x_log10() +
  ggtitle("cross validated mean loss as function of lambda/s",
          subtitle = paste("alpha =", alpha))
# re-fit model with chosen alpha
model <- glmnet(as.matrix(tfs),
                cp$crossFrame[[outcome_name]],
                alpha = alpha,
                family = "gaussian",
                standardize = FALSE,
                lambda = lambdas)
At this point we have a model that works on prepared data (data that has gone through the vtreat and scaling steps). The point to remember: it took a lot of steps to transform the data and build the model, so it may also take a fair number of steps to apply the model.
The question then is: how do we share such a model? Roughly we need to share the model, any fit parameters (such as centering and scaling choices), and the code sequence to apply all of these steps in the proper order. In this case the modeling pipeline consists of the following pieces:
- The treatment plan cp$treatments.
- The list of chosen variables newvars.
- The centering and scaling vectors centering and scaling.
- The glmnet model model and the final chosen lambda/s value s.
These values are needed to run any new data through the sequence of operations:

- Using vtreat to prepare the data.
- Re-scaling and centering the chosen variables.
- Converting from a data.frame to a matrix of only input-variable columns.
- Applying the glmnet model.
- Converting the matrix of predictions into a vector of predictions.
These are all steps we did in an ad-hoc manner while building the model. Having worked hard to build the model (taking a lot of steps and optimizing parameters/hyper-parameters) has left us with a lot of items and steps we need to share to have the full prediction process.
A really neat way to simply share all of these things is the following: use wrapr’s “function object” abstraction, which treats names of functions plus arguments as an efficient notation for partial evaluation. We can use this system to encode our model prediction pipeline as follows.
pipeline <- pkgfn("vtreat::prepare",
                  arg_name = "dframe",
                  args = list(treatmentplan = cp$treatments,
                              varRestriction = newvars)) %.>%
  wrapfn(center_scale,
         arg_name = "d",
         args = list(center = centering, scale = scaling)) %.>%
  srcfn(qe(as.matrix(.[, newvars, drop = FALSE])),
        args = list(newvars = newvars)) %.>%
  pkgfn("glmnet::predict.glmnet",
        arg_name = "newx",
        args = list(object = model, s = s)) %.>%
  srcfn(qe(.[, cname, drop = TRUE]),
        args = list(cname = "1"))
cat(format(pipeline))
UnaryFnList(
vtreat::prepare(dframe=., treatmentplan, varRestriction),
PartialFunction{center_scale}(d=., center, scale),
SrcFunction{ as.matrix(.[, newvars, drop = FALSE]) }(.=., newvars),
glmnet::predict.glmnet(newx=., object, s),
SrcFunction{ .[, cname, drop = TRUE] }(.=., cname))
The above pipeline uses several wrapr
abstractions:
- pkgfn(), which wraps a function specified by a package-qualified name. When used, the function is called with the pipeline argument as the first argument (named arg_name), and extra arguments supplied from the list args.
- wrapfn(), which wraps a function specified by value. When used, the function is called with the pipeline argument as the first argument (named arg_name), and extra arguments supplied from the list args.
- srcfn(), which wraps quoted code (here quoted by wrapr::qe(), but quote marks will also work). When used, the code is evaluated in an environment with the pipeline argument mapped to the name specified in arg_name (defaults to .), and the additional arguments from args available in the evaluation environment.
Each of these captures the action and extra values needed to perform each step of the model application. The steps can be chained together by pipes (as shown above), or assembled directly as a list using fnlist()
or as_fnlist(). Function lists can be built all at once, or concatenated together from pieces. More details on wrapr
function objects can be found here.
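For example, the first two stages could also be assembled directly (a sketch, assuming fnlist() accepts the individual steps as arguments):

prep_steps <- fnlist(
  pkgfn("vtreat::prepare",
        arg_name = "dframe",
        args = list(treatmentplan = cp$treatments,
                    varRestriction = newvars)),
  wrapfn(center_scale,
         arg_name = "d",
         args = list(center = centering, scale = scaling)))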
After all this you can then pipe data into the pipeline to get predictions.
dTrain %.>% pipeline %.>% head(.)
## [1] -0.60372843 0.46662315 0.15205810 0.39812493 0.44087441 0.09160836
dTest %.>% pipeline %.>% head(.)
[1] 0.532070422 -0.046165380 -1.347887772 0.007668392 -1.133345162
[6] 0.662722678
Or you can use a functional notation ApplyTo()
.
head(ApplyTo(pipeline, dTrain))
## [1] -0.60372843 0.46662315 0.15205810 0.39812493 0.44087441 0.09160836
The pipeline itself is an R
S4
class containing a simple list of steps.
[[1]]
[1] “vtreat::prepare(dframe=., treatmentplan, varRestriction)”
##
[[2]]
[1] “PartialFunction{center_scale}(d=., center, scale)”
##
[[3]]
[1] “SrcFunction{ as.matrix(.[, newvars, drop = FALSE]) }(.=., newvars)”
##
[[4]]
[1] “glmnet::predict.glmnet(newx=., object, s)”
##
[[5]]
[1] “SrcFunction{ .[, cname, drop = TRUE] }(.=., cname)”
str(pipeline@items[[3]])
Formal class 'SrcFunction' [package "wrapr"] with 3 slots
..@ expr_src: chr "as.matrix(.[, newvars, drop = FALSE])"
..@ arg_name: chr "."
..@ args :List of 1
.. ..$ newvars: chr [1:21] "var_001_clean" "var_001_isBAD" "var_002_clean" "var_002_isBAD" ...
The pipeline can be saved, and contains the required parameters in simple lists.
saveRDS(dTrain, "dTrain.RDS")
saveRDS(pipeline, "pipeline.RDS")
Now the processing pipeline can be read back and used as follows.
# Fresh R session, not part of this markdown
library("wrapr")

pipeline <- readRDS("pipeline.RDS")
dTrain <- readRDS("dTrain.RDS")

dTrain %.>% pipeline %.>% head(.)
## [1] -0.60372843 0.46662315 0.15205810 0.39812493 0.44087441 0.09160836
We can use this pipeline on different data, as we do to create performance plots below.
dTrain$prediction <- dTrain %.>% pipeline
WVPlots::ScatterHist(
  dTrain, "prediction", "y",
  "fit on training data",
  smoothmethod = "identity",
  estimate_sig = TRUE,
  point_alpha = 0.1,
  contour = TRUE)
dTest$prediction <- dTest %.>% pipeline
WVPlots::ScatterHist(
  dTest, "prediction", "y",
  "fit on test",
  smoothmethod = "identity",
  estimate_sig = TRUE,
  point_alpha = 0.1,
  contour = TRUE)
The idea is: the work was complicated, but sharing should not be complicated.
And that is how to effectively save, share, and deploy non-trivial modeling workflows.
(The source for this example can be found here. More on wrapr
function objects can be found here. We also have another run here showing why we do not recommend always using the number of variables as “just another hyper-parameter”, but instead using simple threshold based filtering. The coming version of vtreat
also has a new non-linear variable filter function called value_variables_*().)