Configures the optimizer of a deep learning model.
Parameters
The parameters are defined dynamically from the optimizer that is set during construction.
Input and Output Channels
There is one input channel "input" and one output channel "output".
During training, the channels are of class ModelDescriptor.
During prediction, the channels are of class Task.
Internals
During training, the optimizer is cloned and added to the ModelDescriptor.
Note that the parameter set of the stored TorchOptimizer is reference-identical to the parameter set of the pipeop itself.
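Because the stored TorchOptimizer shares its ParamSet with the pipeop, a hyperparameter changed on the pipeop after training is also visible on the optimizer inside the ModelDescriptor. A minimal sketch, assuming mlr3torch and its dependencies are installed (the specific values are illustrative):

```r
library(mlr3torch)

po_opt = po("torch_optimizer", t_opt("sgd"), lr = 0.1)
md = po("torch_ingress_num")$train(list(tsk("iris")))
md_out = po_opt$train(md)[[1L]]

# The ParamSet is shared by reference, so this change is reflected
# in the optimizer stored in the ModelDescriptor as well:
po_opt$param_set$values$lr = 0.5
md_out$optimizer$param_set$values$lr
```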
See also
Other PipeOp: mlr_pipeops_module, mlr_pipeops_torch_callbacks
Other Model Configuration: ModelDescriptor(), mlr_pipeops_torch_callbacks, mlr_pipeops_torch_loss, model_descriptor_union()
Super class
mlr3pipelines::PipeOp -> PipeOpTorchOptimizer
Methods
Method new()
Creates a new instance of this R6 class.
Usage
PipeOpTorchOptimizer$new(
optimizer = t_opt("adam"),
id = "torch_optimizer",
param_vals = list()
)
Arguments
optimizer
(TorchOptimizer or character(1) or torch_optimizer_generator)
The optimizer (or something convertible via as_torch_optimizer()).
id
(character(1))
Identifier of the resulting object.
param_vals
(list())
List of hyperparameter settings, overwriting the hyperparameter settings that would otherwise be set during construction.
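The optimizer argument accepts any of the three types listed above; each is converted internally via as_torch_optimizer(). A sketch of the equivalent constructions, assuming mlr3torch and torch are installed:

```r
library(mlr3torch)

# TorchOptimizer object (the default form):
po1 = po("torch_optimizer", optimizer = t_opt("adam"))

# character(1), resolved against the optimizer dictionary:
po2 = po("torch_optimizer", optimizer = "adam")

# torch_optimizer_generator from the torch package:
po3 = po("torch_optimizer", optimizer = torch::optim_adam)
```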
Examples
po_opt = po("torch_optimizer", "sgd", lr = 0.01)
po_opt$param_set
#> <ParamSet(5)>
#> id class lower upper nlevels default value
#> <char> <char> <num> <num> <num> <list> <list>
#> 1: lr ParamDbl 0 Inf Inf <NoDefault[0]> 0.01
#> 2: momentum ParamDbl 0 1 Inf 0 [NULL]
#> 3: dampening ParamDbl 0 1 Inf 0 [NULL]
#> 4: weight_decay ParamDbl 0 1 Inf 0 [NULL]
#> 5: nesterov ParamLgl NA NA 2 FALSE [NULL]
mdin = po("torch_ingress_num")$train(list(tsk("iris")))
mdin[[1L]]$optimizer
#> NULL
mdout = po_opt$train(mdin)
mdout[[1L]]$optimizer
#> <TorchOptimizer:sgd> Stochastic Gradient Descent
#> * Generator: optim_ignite_sgd
#> * Parameters: lr=0.01
#> * Packages: torch,mlr3torch