
Represents a neural network using a Graph that contains mostly PipeOpModules.

Usage

nn_graph(graph, shapes_in, output_map = graph$output$name, list_output = FALSE)

Arguments

graph

(Graph)
The Graph to wrap. Is not cloned.

shapes_in

(named list() of integer())
Shape info of the tensors that go into graph, given as integer vectors where NA marks a dimension of arbitrary size, e.g. the batch dimension. Names must be graph$input$name, possibly in a different order.

output_map

(character)
Which of graph's outputs to use. Must be a subset of graph$output$name.

list_output

(logical(1))
Whether output should be a list of tensors. If FALSE (default), then length(output_map) must be 1.

Value

An nn_graph, i.e. a torch nn_module that wraps the given graph.

Fields

  • graph :: Graph
    The graph (consisting primarily of PipeOpModules) that is wrapped by the network.

  • input_map :: character()
    The names of the input arguments of the network.

  • shapes_in :: list()
    The shapes of the input tensors of the network.

  • output_map :: character()
    Which output elements of the graph are returned by the $forward() method.

  • list_output :: logical(1)
    Whether the output is a list of tensors.

  • module_list :: nn_module_list
    The list of modules in the network.

Examples

library(mlr3torch)
library(mlr3pipelines)
library(torch)

# build a graph of PipeOpModules: linear -> relu -> linear
graph = mlr3pipelines::Graph$new()
graph$add_pipeop(po("module_1", module = nn_linear(10, 20)), clone = FALSE)
graph$add_pipeop(po("module_2", module = nn_relu()), clone = FALSE)
graph$add_pipeop(po("module_3", module = nn_linear(20, 1)), clone = FALSE)
graph$add_edge("module_1", "module_2")
graph$add_edge("module_2", "module_3")

# wrap the graph as a torch module; NA marks the variable batch dimension
network = nn_graph(graph, shapes_in = list(module_1.input = c(NA, 10)))

# a batch of 16 observations with 10 features
x = torch_randn(16, 10)

# forward arguments are named after the graph's input channels
network(module_1.input = x)
#> torch_tensor
#> -0.3387
#>  0.4314
#> -0.0250
#>  0.0625
#>  0.0520
#>  0.1413
#>  0.3711
#>  0.0675
#>  0.0952
#>  0.0680
#> -0.4348
#>  0.3879
#>  0.2678
#> -0.0356
#> -0.0562
#>  0.3363
#> [ CPUFloatType{16,1} ][ grad_fn = <AddmmBackward0> ]
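
The example above relies on the defaults output_map = graph$output$name and list_output = FALSE. The following sketch (an illustration based on the pattern above, not taken verbatim from the package documentation) wraps a graph with two terminal PipeOpModules so that $forward() returns a named list with one tensor per entry of output_map; the channel names are assumed to follow the module_<id>.input / module_<id>.output pattern seen in the example above.

# a graph with two unconnected modules has two inputs and two outputs
graph2 = mlr3pipelines::Graph$new()
graph2$add_pipeop(po("module_1", module = nn_linear(10, 1)), clone = FALSE)
graph2$add_pipeop(po("module_2", module = nn_linear(10, 2)), clone = FALSE)

network2 = nn_graph(
  graph2,
  shapes_in = list(module_1.input = c(NA, 10), module_2.input = c(NA, 10)),
  output_map = c("module_1.output", "module_2.output"),
  list_output = TRUE
)

# fields describing the wrapped graph
network2$input_map
network2$shapes_in

# with list_output = TRUE, the result is a named list of tensors,
# one element per entry of output_map
out = network2(module_1.input = x, module_2.input = x)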