We have previously added a set of company-identity-agnostic predictors, such as the number of drivers a company employs, the number of vehicles in the fleet with a hydraulic lift, and so on. We took this approach, rather than having each company as a unique predictor, so that the addition of a new contractor would not (necessarily) confuse our model.
This does mean that company history, specific vehicle history, and so on, are not included. Nevertheless, we want to see how much information is in the new predictors.
Data exploration
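The original code for this step did not survive extraction, so here is a minimal sketch of the kind of exploration described below. It assumes the merged data live in a data frame called `breakdowns`, with the company-level predictors already joined on; the column names (`hydraulic_lifts`, `boro`, `time_delayed`) are my invention for illustration.

```r
library(ggplot2)

# Assumed: 'breakdowns' holds one row per delay event, with the
# company-level predictors joined on (column names are hypothetical)
str(breakdowns)

# Delay time against one of the new fleet-level predictors
ggplot(breakdowns, aes(x = hydraulic_lifts, y = time_delayed)) +
  geom_point(alpha = 0.2) +
  geom_smooth(method = "lm") +
  labs(x = "Hydraulic lifts in fleet", y = "Delay (minutes)")

# Delay by borough, to confirm the structure carried over
# from the original data
ggplot(breakdowns, aes(x = boro, y = time_delayed)) +
  geom_boxplot() +
  coord_flip()
```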
There are already some interesting insights from these data. Drivers appear more likely to alert OPT about delays that end up being shorter. Companies with a lot of ambulatory seats in their fleet suffer shorter delays. Services closer to peak hour suffer shorter delays, which is something of a surprise. Companies with a lot of attendants and hydraulic lifts suffer longer delays. Having a high number of pre-K services disposes a company towards shorter delays; having a high number of school-age services disposes it towards longer delays. And so on. We also see that the structure from the original data carries over, including different delays for different boroughs and for different breakdown reasons.
One of the first questions we might ask of all these predictors is: are they correlated with each other? If they are not, then perhaps we could, at some stage, run a naive Bayesian classification. Let’s take a look at the cross-correlations between some of the predictors. We include ‘time_delayed’ for reference:
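Again the original code block was lost, so here is a minimal sketch of the cross-correlation check, under the same assumptions as above; the shortlist of predictor names is hypothetical.

```r
library(corrplot)

# Hypothetical shortlist of the 'more important' numeric predictors,
# with time_delayed included for reference
num_vars <- c("time_delayed", "number_of_drivers", "attendants",
              "hydraulic_lifts", "ambulatory_seats",
              "pre_k_services", "school_age_services")

# Pairwise Pearson correlations, then a quick visual check
corr_mat <- cor(breakdowns[, num_vars], use = "pairwise.complete.obs")
corrplot(corr_mat, method = "color", type = "upper", tl.cex = 0.8)
```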
Even in this reduced set of 'more important' predictors, we can see that many predictors correlate more strongly with each other than with the target variable (time_delayed). This seems like bad news for the independence assumption of naive Bayes. The data set might also be a candidate for dimensionality reduction, but at this stage I wanted to keep things as human-transparent as possible.
Building a Cubist regression model with the extra predictors
This next part is straightforward. We go through essentially the same process as before, but this time armed with the extra predictors. If these predictors are useful, our RMSE should be lower and our R² should be higher.
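The model-fitting code was also lost in extraction. Below is a sketch of a comparable workflow using caret's Cubist interface; the split proportion, tuning grid, and column names are assumptions rather than the values used in the original post.

```r
library(caret)
library(ggplot2)

set.seed(42)  # reproducible split

# Hold out a test set stratified on the target
in_train <- createDataPartition(breakdowns$time_delayed,
                                p = 0.8, list = FALSE)
train_df <- breakdowns[in_train, ]
test_df  <- breakdowns[-in_train, ]

# Cubist regression over all predictors, now including the
# company-level ones; the committees/neighbors grid is a guess
# at sensible values, not the original tuning grid
cubist_fit <- train(
  time_delayed ~ .,
  data      = train_df,
  method    = "cubist",
  tuneGrid  = expand.grid(committees = c(1, 10, 50),
                          neighbors  = c(0, 5, 9)),
  trControl = trainControl(method = "cv", number = 5)
)

# Out-of-sample RMSE and R^2
test_df$time_predicted <- predict(cubist_fit, newdata = test_df)
postResample(pred = test_df$time_predicted,
             obs  = test_df$time_delayed)

# Predicted vs actual, with the line of perfect prediction
ggplot(test_df, aes(x = time_predicted, y = time_delayed)) +
  geom_point(alpha = 0.2) +
  geom_abline(slope = 1, intercept = 0, colour = "red")
```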
We have more than doubled our R² value, although we are still below 0.5. Our plot of time_delayed against time_predicted is starting to cluster more tightly around the line of perfect prediction. We are still mispredicting many delay times, but we are certainly doing better than before.
While there is possibly more to be done to improve our predictors, we can guess that this approach will not take us much further. We know that our data have a problem with how delays have been reported, and it isn't going to go away. So perhaps we ought to recast the problem as classification: how well can we predict whether the delay time will be above some arbitrary threshold? This is the approach we take next.