For those who will laugh at seeing deep learning with one hidden layer and the Iris data set of 150 records, I will say: you’re perfectly right! The goal at this stage is simply to take the first steps.
fit a regression model manually (hard way)
Subject: predict Sepal.Length given the other Iris parameters. First with gradient descent and the default hyper-parameter values for learning rate (0.001) and mini-batch size (32):
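A minimal sketch of this step, assuming the automl package API (automl_train_manual and automl_predict); the xmat/ymat variable names are illustrative:

```r
library(automl)

data(iris)
# features: every Iris column except Sepal.Length (Species recoded as numeric)
xmat <- cbind(iris[, 2:4], as.numeric(iris$Species))
ymat <- iris[, 1]

# train with the default hyper-parameters (learningrate = 0.001, minibatchsize = 32)
amlmodel <- automl_train_manual(Xref = xmat, Yref = ymat)

# compare actual vs predicted values
res <- cbind(ymat, automl_predict(model = amlmodel, X = xmat))
colnames(res) <- c('actual', 'predict')
head(res)
```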
:-[ no pain, no gain …
After some manual fine-tuning of the learning rate, mini-batch size and number of iterations (epochs):
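A sketch of the tuned call, assuming the hyper-parameters are passed through the hpar list; the values below are illustrative, not necessarily the ones used here:

```r
# same data, manually tuned hyper-parameters (illustrative values)
amlmodel <- automl_train_manual(Xref = xmat, Yref = ymat,
                                hpar = list(learningrate = 0.01,
                                            minibatchsize = 2^2,
                                            numiterations = 30))
res <- cbind(ymat, automl_predict(model = amlmodel, X = xmat))
colnames(res) <- c('actual', 'predict')
head(res)
```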
A better result, but it took human effort!
fit a regression model automatically (easy way, Mix 1)
Same subject: predict Sepal.Length given the other Iris parameters.
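A sketch assuming automl_train is the function that searches the hyper-parameters automatically:

```r
# let automl tune the hyper-parameters itself (no manual hpar needed)
amlmodel <- automl_train(Xref = xmat, Yref = ymat)

res <- cbind(ymat, automl_predict(model = amlmodel, X = xmat))
colnames(res) <- c('actual', 'predict')
head(res)
```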
It’s even better, with no human effort, only machine time. Note that users on Windows won’t benefit from parallelization: the function uses the parallel package included with base R…
fit a regression model experimentally (experimental way, Mix 2)
Same subject: predict Sepal.Length given the other Iris parameters.
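A sketch assuming the experimental mode is selected through hpar with modexec = 'trainwpso' (weights trained by particle swarm optimization instead of gradient descent); the iteration and particle counts are illustrative:

```r
# train the weights directly with PSO, no gradient computation
amlmodel <- automl_train_manual(Xref = xmat, Yref = ymat,
                                hpar = list(modexec = 'trainwpso',
                                            numiterations = 30,
                                            psopartpopsize = 50))
res <- cbind(ymat, automl_predict(model = amlmodel, X = xmat))
colnames(res) <- c('actual', 'predict')
head(res)
```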
Pretty good too, even better! But it is time-consuming on larger data sets, where gradient descent should be preferred.
fit a regression model with custom cost (experimental way, Mix 2)
Same subject: predict Sepal.Length given the other Iris parameters. Let’s try Mean Absolute Percentage Error instead of Mean Squared Error.
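A sketch assuming the custom cost is passed as a string through a costcustformul parameter evaluated with y and yhat; the exact parameter name and formula syntax are assumptions:

```r
# swap the default MSE cost for MAPE
# (costcustformul string with y / yhat names is an assumption)
amlmodel <- automl_train_manual(Xref = xmat, Yref = ymat,
                                hpar = list(modexec = 'trainwpso',
                                            numiterations = 30,
                                            psopartpopsize = 50,
                                            costcustformul = 'mean(abs((y - yhat) / y))'))
res <- cbind(ymat, automl_predict(model = amlmodel, X = xmat))
colnames(res) <- c('actual', 'predict')
head(res)
```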
fit a classification model with softmax (Mix 2)
Subject: predict Species given the other Iris parameters. Softmax is available with PSO; no derivative needed.
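A sketch of the classification setup: the Species labels are one-hot encoded and the output activation set to softmax via layersacttype (the layer shapes and counts below are illustrative assumptions):

```r
data(iris)
xmat <- iris[, 1:4]
lab2pred <- levels(iris$Species)
lghlab <- length(lab2pred)

# one-hot encode the labels: one column per class
ymat <- matrix(seq_len(lghlab), nrow(xmat), lghlab, byrow = TRUE)
ymat <- (ymat == as.numeric(iris$Species)) + 0

# PSO training with a softmax output layer
amlmodel <- automl_train_manual(Xref = xmat, Yref = ymat,
                                hpar = list(modexec = 'trainwpso',
                                            layersshape = c(10, 0),
                                            layersacttype = c('tanh', 'softmax'),
                                            numiterations = 50,
                                            psopartpopsize = 50))

# predicted class = column with the highest score
res <- automl_predict(model = amlmodel, X = xmat)
pred <- lab2pred[apply(res, 1, which.max)]
head(cbind(actual = as.character(iris$Species), predict = pred))
```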
change the model parameters (shape …)
Same subject: predict Species given the other Iris parameters. First example: gradient descent and 2 hidden layers of 10 nodes each, with different activation functions for the hidden layers.
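A sketch assuming the shape and activations are set through layersshape and layersacttype (one entry per layer, 0 marking the output layer):

```r
# gradient descent, 2 hidden layers of 10 nodes, mixed activations
amlmodel <- automl_train_manual(Xref = xmat, Yref = ymat,
                                hpar = list(layersshape = c(10, 10, 0),
                                            layersacttype = c('tanh', 'relu', ''),
                                            layersdropoprob = c(0, 0, 0)))
```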
NB: the last activation type may be left blank (it will be set automatically).
Second example: gradient descent with no hidden layer (logistic regression).
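A sketch of the same call with no hidden layer, under the same layersshape assumption:

```r
# no hidden layer: the network reduces to a logistic regression
amlmodel <- automl_train_manual(Xref = xmat, Yref = ymat,
                                hpar = list(layersshape = c(0),
                                            layersacttype = c('sigmoid'),
                                            layersdropoprob = c(0)))
```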
ToDo List
- transfer learning from existing frameworks
- add autotune to other parameters (layers, dropout, …)
- CNN
- RNN
Join the team! https://github.com/aboulaboul/automl