
Represents a neural network using a Graph that contains mostly PipeOpModules.

Usage

nn_graph(graph, shapes_in, output_map = graph$output$name, list_output = FALSE)

Arguments

graph

(Graph)
The Graph to wrap. Is not cloned.

shapes_in

(named list() of integer())
Shape info of the tensors that go into graph. Names must match graph$input$name, possibly in a different order.

output_map

(character())
Which of graph's outputs to use. Must be a subset of graph$output$name.

list_output

(logical(1))
Whether output should be a list of tensors. If FALSE (default), then length(output_map) must be 1.
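
The interplay of output_map and list_output can be illustrated with a minimal sketch (not taken from the package examples; it assumes mlr3torch and torch are attached and that the module's single output channel is named module_1.output):

g = mlr3pipelines::Graph$new()
g$add_pipeop(po("module_1", module = nn_linear(4, 2)), clone = FALSE)

net = nn_graph(
  g,
  shapes_in = list(module_1.input = c(NA, 4)),
  output_map = g$output$name,  # the default: all graph outputs, here "module_1.output"
  list_output = TRUE           # $forward() then returns a named list of tensors
)
net(module_1.input = torch_randn(3, 4))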

Value

nn_graph

Fields

  • graph :: Graph
    The graph (consisting primarily of PipeOpModules) that is wrapped by the network.

  • input_map :: character()
    The names of the input arguments of the network.

  • shapes_in :: list()
    The shapes of the input tensors of the network.

  • output_map :: character()
    Which output elements of the graph are returned by the $forward() method.

  • list_output :: logical(1)
    Whether the output is a list of tensors.

  • module_list :: nn_module_list
    The list of modules in the network.

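Continuing the sketch from the Arguments section, these fields can be inspected directly on the constructed module (the printed values shown in the comments are illustrative and depend on the wrapped graph):

net$graph        # the wrapped Graph (not cloned)
net$input_map    # e.g. "module_1.input"
net$shapes_in    # e.g. list(module_1.input = c(NA, 4))
net$output_map   # e.g. "module_1.output"
net$module_list  # nn_module_list holding the wrapped modules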

Examples

library(torch)
library(mlr3torch)

# Build a graph of PipeOpModules: linear -> relu -> linear
graph = mlr3pipelines::Graph$new()
graph$add_pipeop(po("module_1", module = nn_linear(10, 20)), clone = FALSE)
graph$add_pipeop(po("module_2", module = nn_relu()), clone = FALSE)
graph$add_pipeop(po("module_3", module = nn_linear(20, 1)), clone = FALSE)
graph$add_edge("module_1", "module_2")
graph$add_edge("module_2", "module_3")

# Wrap the graph as a torch network; input shapes are named after graph$input$name
network = nn_graph(graph, shapes_in = list(module_1.input = c(NA, 10)))

x = torch_randn(16, 10)

network(module_1.input = x)
#> torch_tensor
#> 0.001 *
#> -6.6333
#>  -465.7582
#>  -167.5749
#>  -433.0113
#>  -152.4274
#>  -261.1258
#>  -366.3096
#>  -139.6945
#>  -948.1930
#>  -162.9206
#>  -683.4478
#>  -253.8367
#>  -314.5595
#>  -82.4146
#>  -486.0691
#>  -435.3711
#> [ CPUFloatType{16,1} ][ grad_fn = <AddmmBackward0> ]
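
Since the returned network behaves like an ordinary torch module, it can also be trained like one. The following is a minimal sketch (not part of the original example) that continues with the objects created above and uses standard torch functions:

opt = optim_sgd(network$parameters, lr = 0.01)
y = torch_randn(16, 1)

# one gradient step on a mean-squared-error loss
loss = nnf_mse_loss(network(module_1.input = x), y)
opt$zero_grad()
loss$backward()
opt$step()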