Represents a neural network using a Graph that contains mostly PipeOpModules.

Usage

nn_graph(graph, shapes_in, output_map = graph$output$name, list_output = FALSE)

Arguments

graph

(Graph)
The Graph to wrap; it is not cloned.

shapes_in

(named list of integer())
Shapes of the tensors that go into graph; an NA entry stands for a dimension of arbitrary size, typically the batch dimension. Names must be graph$input$name, possibly in a different order.

output_map

(character)
Which of graph's outputs to use. Must be a subset of graph$output$name.

list_output

(logical(1))
Whether the output should be a list of tensors. If FALSE (the default), length(output_map) must be 1. A sketch combining output_map and list_output follows below this list.
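
A minimal sketch (not part of the package examples) of how output_map and list_output interact, assuming a graph with two unconnected PipeOpModules so that it has two inputs and two outputs:

g = mlr3pipelines::Graph$new()
g$add_pipeop(po("module_1", module = nn_linear(10, 4)), clone = FALSE)
g$add_pipeop(po("module_2", module = nn_linear(5, 2)), clone = FALSE)

net = nn_graph(
  g,
  shapes_in = list(module_1.input = c(NA, 10), module_2.input = c(NA, 5)),
  output_map = c("module_1.output", "module_2.output"),
  list_output = TRUE
)

# With list_output = TRUE, $forward() returns a list with one tensor per
# entry of output_map instead of a single tensor.
out = net(module_1.input = torch_randn(8, 10), module_2.input = torch_randn(8, 5))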

Value

An nn_graph, a torch nn_module that wraps the given graph.

Fields

  • graph :: Graph
    The graph (consisting primarily of PipeOpModules) that is wrapped by the network.

  • input_map :: character()
    The names of the input arguments of the network.

  • shapes_in :: list()
    The shapes of the input tensors of the network.

  • output_map :: character()
    Which output elements of the graph are returned by the $forward() method.

  • list_output :: logical(1)
    Whether the output is a list of tensors.

  • module_list :: nn_module_list
    The list of modules in the network.

Examples

library(mlr3pipelines)
library(torch)
library(mlr3torch)

# Build a graph of PipeOpModules: linear(10, 20) -> relu -> linear(20, 1)
graph = mlr3pipelines::Graph$new()
graph$add_pipeop(po("module_1", module = nn_linear(10, 20)), clone = FALSE)
graph$add_pipeop(po("module_2", module = nn_relu()), clone = FALSE)
graph$add_pipeop(po("module_3", module = nn_linear(20, 1)), clone = FALSE)
graph$add_edge("module_1", "module_2")
graph$add_edge("module_2", "module_3")

# Wrap the graph as a torch module; NA marks the batch dimension
network = nn_graph(graph, shapes_in = list(module_1.input = c(NA, 10)))

x = torch_randn(16, 10)

# Forward arguments are named after the graph's input channels
network(module_1.input = x)
#> torch_tensor
#>  0.1273
#>  0.1162
#>  0.1482
#>  0.1670
#>  0.0527
#> -0.1087
#>  0.0917
#>  0.2472
#>  0.1944
#> -0.0939
#>  0.0297
#>  0.2835
#>  0.1391
#>  0.2677
#>  0.0080
#>  0.2626
#> [ CPUFloatType{16,1} ][ grad_fn = <AddmmBackward0> ]
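
Building on the example above, the fields documented under "Fields" can be inspected on the constructed module; a hedged sketch (output not shown), assuming the network object from the example:

# Inspect the documented fields.
network$input_map
network$shapes_in
network$output_map

# The wrapped modules are registered via module_list, so their parameters
# are exposed like for any torch nn_module.
length(network$parameters)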