Matmul
Source: R/gen-namespace-docs.R, R/gen-namespace-examples.R, R/gen-namespace.R (torch_matmul.Rd)
torch_matmul(input, other) -> Tensor
Matrix product of two tensors.
The behavior depends on the dimensionality of the tensors as follows:
If both tensors are 1-dimensional, the dot product (scalar) is returned.
If both arguments are 2-dimensional, the matrix-matrix product is returned.
If the first argument is 1-dimensional and the second argument is 2-dimensional, a 1 is prepended to its dimension for the purpose of the matrix multiply. After the matrix multiply, the prepended dimension is removed.
If the first argument is 2-dimensional and the second argument is 1-dimensional, the matrix-vector product is returned.
If both arguments are at least 1-dimensional and at least one argument is N-dimensional (where N > 2), then a batched matrix multiply is returned. If the first argument is 1-dimensional, a 1 is prepended to its dimension for the purpose of the batched matrix multiply and removed after. If the second argument is 1-dimensional, a 1 is appended to its dimension for the purpose of the batched matrix multiply and removed after. The non-matrix (i.e. batch) dimensions are broadcasted (and thus must be broadcastable). For example, if input is a \((j \times 1 \times n \times m)\) tensor and other is a \((k \times m \times p)\) tensor, the result will be a \((j \times k \times n \times p)\) tensor.
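The broadcasting case described above is not covered by the Examples section below; a minimal sketch checking it directly (the concrete sizes j = 2, k = 3, n = 4, m = 5, p = 6 are illustrative choices, and a working torch installation is assumed):

```r
library(torch)

j <- 2; k <- 3; n <- 4; m <- 5; p <- 6
a <- torch_randn(c(j, 1, n, m))  # batch dims (j x 1), matrix dims (n x m)
b <- torch_randn(c(k, m, p))     # batch dim (k), matrix dims (m x p)

# Batch dims (j x 1) and (k) broadcast to (j x k); the matrix
# dims contract over m, giving an (j x k x n x p) result.
out <- torch_matmul(a, b)
out$shape  # 2 3 4 6
```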
Examples
if (torch_is_installed()) {
  # vector x vector: dot product, returned as a 0-dimensional (scalar) tensor
  tensor1 <- torch_randn(c(3))
  tensor2 <- torch_randn(c(3))
  torch_matmul(tensor1, tensor2)
  # matrix x vector: matrix-vector product, shape (3)
  tensor1 <- torch_randn(c(3, 4))
  tensor2 <- torch_randn(c(4))
  torch_matmul(tensor1, tensor2)
  # batched matrix x broadcasted vector: shape (10, 3)
  tensor1 <- torch_randn(c(10, 3, 4))
  tensor2 <- torch_randn(c(4))
  torch_matmul(tensor1, tensor2)
  # batched matrix x batched matrix: shape (10, 3, 5)
  tensor1 <- torch_randn(c(10, 3, 4))
  tensor2 <- torch_randn(c(10, 4, 5))
  torch_matmul(tensor1, tensor2)
  # batched matrix x broadcasted matrix: shape (10, 3, 5)
  tensor1 <- torch_randn(c(10, 3, 4))
  tensor2 <- torch_randn(c(4, 5))
  torch_matmul(tensor1, tensor2)
}
#> torch_tensor
#> (1,.,.) =
#> -3.1755 -0.8197 -1.6410 1.2531 1.3869
#> 0.8883 -1.6220 0.1196 -0.5877 0.7253
#> -2.2273 -2.4526 -2.0889 -0.2598 1.7968
#>
#> (2,.,.) =
#> 1.1940 -1.5529 -1.3216 -0.0645 3.4043
#> -0.8836 0.2056 -0.4627 -0.1654 -0.4402
#> -0.2359 -1.9771 -0.6906 -0.3362 1.3155
#>
#> (3,.,.) =
#> 2.7033 -0.7362 0.3108 -1.7578 0.1333
#> 2.4591 1.8003 0.8733 0.1122 0.1121
#> -4.5826 2.3973 -2.1614 -0.2593 -2.5692
#>
#> (4,.,.) =
#> -1.8950 -2.0356 -1.5786 -1.2738 0.0011
#> -1.3114 3.1493 1.8424 1.2198 -3.6264
#> -4.1321 1.1383 -3.4148 0.8041 1.3177
#>
#> (5,.,.) =
#> -1.2444 -1.6165 -2.4460 -0.0728 2.8524
#> 0.1638 1.0144 0.1142 -0.1725 -0.7413
#> -3.5386 0.4889 -1.8599 -0.0119 -0.8103
#>
#> (6,.,.) =
#> -2.6408 -0.7031 -1.3456 0.0279 -0.0614
#> 3.9895 -2.5311 1.9181 -0.3610 1.7256
#> -2.1188 0.0392 -0.6007 1.2825 0.5182
#>
#> ... [the output was truncated (use n=-1 to disable)]
#> [ CPUFloatType{10,3,5} ]