Fully connected feed-forward network with dropout after each activation function.
The features can either be a single lazy_tensor or one or more numeric columns (but not both).
Parameters
Parameters from LearnerTorch, as well as:
activation :: [nn_module]
The activation function. Initialized to nn_relu.
activation_args :: named list()
A named list with initialization arguments for the activation function. Initialized to an empty list.
neurons :: integer()
The number of neurons per hidden layer. By default there is no hidden layer. Setting this to c(10, 20) creates a network whose first hidden layer has 10 neurons and whose second has 20.
n_layers :: integer()
The number of layers. This parameter must only be set when neurons has length 1.
p :: numeric(1)
The dropout probability. Initialized to 0.5.
shape :: integer() or NULL
The input shape of length 2, e.g. c(NA, 5). Only needs to be present when there is a lazy tensor input with unknown shape (NULL). Otherwise the input shape is inferred from the number of numeric features.
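The architecture parameters above can be combined when constructing the learner. A minimal sketch (assuming the mlr3torch and torch packages are installed; nn_tanh comes from torch):

```r
library(mlr3torch)

# Configure an MLP with two hidden layers of 32 neurons each.
# neurons = 32 together with n_layers = 2 is equivalent to
# setting neurons = c(32, 32) directly.
learner = lrn("classif.mlp",
  neurons = 32,
  n_layers = 2,
  activation = nn_tanh,  # replaces the default nn_relu
  p = 0.2,               # dropout probability after each activation
  epochs = 10, batch_size = 32, device = "cpu"
)
```

Note that n_layers is only valid when neurons has length 1; to give layers different widths, pass the full vector to neurons instead.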
References
Gorishniy Y, Rubachev I, Khrulkov V, Babenko A (2021). “Revisiting Deep Learning Models for Tabular Data.” arXiv preprint arXiv:2106.11959.
Super classes
mlr3::Learner -> mlr3torch::LearnerTorch -> LearnerTorchMLP
Methods
Inherited methods
mlr3::Learner$base_learner()
mlr3::Learner$configure()
mlr3::Learner$encapsulate()
mlr3::Learner$help()
mlr3::Learner$predict()
mlr3::Learner$predict_newdata()
mlr3::Learner$reset()
mlr3::Learner$selected_features()
mlr3::Learner$train()
mlr3torch::LearnerTorch$dataset()
mlr3torch::LearnerTorch$format()
mlr3torch::LearnerTorch$marshal()
mlr3torch::LearnerTorch$print()
mlr3torch::LearnerTorch$unmarshal()
Method new()
Creates a new instance of this R6 class.
Usage
LearnerTorchMLP$new(
task_type,
optimizer = NULL,
loss = NULL,
callbacks = list()
)
Arguments
task_type (character(1))
The task type, either "classif" or "regr".
optimizer (TorchOptimizer)
The optimizer to use for training. By default, adam is used.
loss (TorchLoss)
The loss used to train the network. By default, mse is used for regression and cross_entropy for classification.
callbacks (list() of TorchCallback)
The callbacks. Must have unique ids.
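Overriding the defaults might look as follows; a sketch assuming mlr3torch is installed (for everyday use, lrn("classif.mlp") is the more convenient constructor):

```r
library(mlr3torch)

# Construct the learner directly via $new(), replacing the default
# adam optimizer and cross_entropy loss with explicit choices.
learner = LearnerTorchMLP$new(
  task_type = "classif",
  optimizer = t_opt("sgd", lr = 0.01),  # stochastic gradient descent
  loss = t_loss("cross_entropy")
)
```

Here t_opt() and t_loss() are the mlr3torch helpers that wrap torch optimizers and losses as TorchOptimizer and TorchLoss objects.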
Examples
# Define the Learner and set parameter values
learner = lrn("classif.mlp")
learner$param_set$set_values(
epochs = 1, batch_size = 16, device = "cpu",
neurons = 10
)
# Define a Task
task = tsk("iris")
# Create train and test set
ids = partition(task)
# Train the learner on the training ids
learner$train(task, row_ids = ids$train)
# Make predictions for the test rows
predictions = learner$predict(task, row_ids = ids$test)
# Score the predictions
predictions$score()
#> classif.ce
#> 0.66