vtreat is a powerful R package for preparing messy real-world data for machine learning. We have further extended the package with a number of features, including rquery/rqdatatable integration (allowing vtreat application at scale on Apache Spark or data.table!). In addition, vtreat can now effectively prepare data for multi-class classification or multinomial modeling.
The two functions needed (mkCrossFrameMExperiment() and the S3 method prepare.multinomial_plan()) are now part of vtreat.
Let’s work a specific example: trying to model multi-class y as a function of x1, x2, and x3.
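The data-generation code was lost in extraction; the following is a minimal sketch of a construction that produces data of the shape shown below. The seed, bonus draws, and row count are assumptions, so the exact values will differ from the sample rows that follow.

```r
library(vtreat)

set.seed(2019)  # arbitrary seed; the original is unknown
n_row <- 1000

# hypothetical per-level "bonuses" that drive the class outcome
sym_bonuses <- rnorm(3)
names(sym_bonuses) <- c("a", "b", "c")
sym_bonuses3 <- rnorm(3)
names(sym_bonuses3) <- as.character(1:3)

d <- data.frame(
  x1 = rnorm(n_row),
  x2 = sample(names(sym_bonuses), n_row, replace = TRUE),
  x3 = sample(names(sym_bonuses3), n_row, replace = TRUE),
  y = "NoInfo",
  stringsAsFactors = FALSE)
d$y[sym_bonuses[d$x2] > pmax(d$x1, sym_bonuses3[d$x3], runif(n_row))] <- "Large1"
d$y[sym_bonuses3[d$x3] > pmax(sym_bonuses[d$x2], d$x1, runif(n_row))] <- "Large2"

head(d)
```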
| x1 | x2 | x3 | y |
| --- | --- | --- | --- |
| 0.8178292 | b | 3 | Large2 |
| 0.5867139 | b | 3 | Large2 |
| -0.6711920 | a | 3 | Large2 |
| 0.1033166 | c | 2 | NoInfo |
| -0.3182176 | b | 1 | NoInfo |
| -0.5914308 | c | 2 | NoInfo |
We define the problem controls and use mkCrossFrameMExperiment() to build both a cross-frame and a treatment plan.
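The call itself was also lost; here is a sketch, under the assumption that the experiment result is a list carrying cross_frame, treat_m, and score_frame entries (check names(cfe_m) in your vtreat version):

```r
# build the cross-frame and the multinomial treatment plan in one step
cfe_m <- mkCrossFrameMExperiment(
  dframe = d,
  varlist = c("x1", "x2", "x3"),
  outcomename = "y")

cross_frame <- cfe_m$cross_frame     # cross-validated treated training data
treatment_plan <- cfe_m$treat_m      # plan of class multinomial_plan
score_frame <- cfe_m$score_frame     # per-outcome variable scores
```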
The cross-frame is the entity safest to train on (unless you have made a separate data split for the treatment-design step). It uses cross-validation to reduce nested model bias. Some notes on this issue are available here, and here.
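The inspection code is gone as well; a minimal look at the cross-frame might be:

```r
# the cross-frame carries only derived numeric columns plus the outcome
str(cross_frame, list.len = 5)
head(cross_frame)
```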
prepare() can apply the designed treatments to new data. Here we are simulating new data by re-using our design data.
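A sketch of that application (with d standing in for new data, as described above):

```r
# apply the stored treatment plan to new data
d_new <- d  # simulate new data by re-using the design data
d_new_treated <- prepare(treatment_plan, d_new)
head(d_new_treated)
```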
We can easily estimate per-outcome variable importance, as well as overall per-variable importance.
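The printing code was lost; assuming the score frame carries varName, rsq, sig, and outcome_target columns (names may vary by vtreat version), the table below could be produced by something like:

```r
# show fit quality (rsq) and significance (sig) of each derived
# variable against each one-vs-rest outcome target
knitr::kable(
  score_frame[, c("varName", "rsq", "sig", "outcome_target")])
```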
| varName | rsq | sig | outcome_target |
| --- | --- | --- | --- |
| x1_clean | 0.0558908 | 0.0000382 | Large1 |
| x2_catP | 0.0275238 | 0.0038536 | Large1 |
| x2_lev_x_a | 0.2680953 | 0.0000000 | Large1 |
| x2_lev_x_b | 0.0885021 | 0.0000002 | Large1 |
| x2_lev_x_c | 0.1060407 | 0.0000000 | Large1 |
| x3_catP | 0.0000346 | 0.9183445 | Large1 |
| x3_lev_x_1 | 0.0141504 | 0.0382554 | Large1 |
| x3_lev_x_2 | 0.0140364 | 0.0390420 | Large1 |
| x3_lev_x_3 | 0.0955004 | 0.0000001 | Large1 |
| x1_clean | 0.0015382 | 0.1615618 | Large2 |
| x2_catP | 0.0013055 | 0.1971725 | Large2 |
| x2_lev_x_a | 0.0000387 | 0.8242956 | Large2 |
| x2_lev_x_b | 0.0014571 | 0.1730603 | Large2 |
| x2_lev_x_c | 0.0009604 | 0.2686774 | Large2 |
| x3_catP | 0.0007725 | 0.3211959 | Large2 |
| x3_lev_x_1 | 0.2602002 | 0.0000000 | Large2 |
| x3_lev_x_2 | 0.2483708 | 0.0000000 | Large2 |
| x3_lev_x_3 | 0.9197595 | 0.0000000 | Large2 |
| x1_clean | 0.0064771 | 0.0034947 | NoInfo |
| x2_catP | 0.0040540 | 0.0208595 | NoInfo |
| x2_lev_x_a | 0.0071709 | 0.0021196 | NoInfo |
| x2_lev_x_b | 0.0000340 | 0.8323647 | NoInfo |
| x2_lev_x_c | 0.0060493 | 0.0047665 | NoInfo |
| x3_catP | 0.0006576 | 0.3520950 | NoInfo |
| x3_lev_x_1 | 0.1838759 | 0.0000000 | NoInfo |
| x3_lev_x_2 | 0.1857824 | 0.0000000 | NoInfo |
| x3_lev_x_3 | 0.7372570 | 0.0000000 | NoInfo |
| Large1_x2_catB | 0.2675964 | 0.0000000 | Large1 |
| Large1_x3_catB | 0.0946910 | 0.0000001 | Large1 |
| Large2_x2_catB | 0.0000291 | 0.8472707 | Large2 |
| Large2_x3_catB | 0.9239860 | 0.0000000 | Large2 |
| NoInfo_x2_catB | 0.0068238 | 0.0027207 | NoInfo |
| NoInfo_x3_catB | 0.7326682 | 0.0000000 | NoInfo |
One can relate these per-target, per-treatment performances back to the original columns by aggregating.
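The aggregation code was lost; here is a minimal sketch, assuming the score frame identifies each derived variable's source column (via a hypothetical orig_var column here; older vtreat score frames call this origName):

```r
# per-original-variable importance: for each source column take the
# best (smallest) significance achieved by any derived variable
# against any outcome target
agg <- aggregate(sig ~ orig_var, data = score_frame, FUN = min)
agg[order(agg$sig), ]
```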
Obvious issues include: computing overall variable importance, and the blow-up and co-dependency of the produced columns. These we leave for the next modeling step to deal with (this is our philosophy for most issues that involve the joint distribution of variables).