Matmul
Source: R/gen-namespace-docs.R, R/gen-namespace-examples.R, R/gen-namespace.R
torch_matmul.Rd

matmul(input, other, out = NULL) -> Tensor
Matrix product of two tensors.
The behavior depends on the dimensionality of the tensors as follows:
If both tensors are 1-dimensional, the dot product (scalar) is returned.
If both arguments are 2-dimensional, the matrix-matrix product is returned.
If the first argument is 1-dimensional and the second argument is 2-dimensional, a 1 is prepended to the first argument's dimensions for the purpose of the matrix multiply. After the matrix multiply, the prepended dimension is removed.
If the first argument is 2-dimensional and the second argument is 1-dimensional, the matrix-vector product is returned.
If both arguments are at least 1-dimensional and at least one argument is N-dimensional (where N > 2), then a batched matrix multiply is returned. If the first argument is 1-dimensional, a 1 is prepended to its dimensions for the purpose of the batched matrix multiply and removed afterwards. If the second argument is 1-dimensional, a 1 is appended to its dimensions for the purpose of the batched matrix multiply and removed afterwards. The non-matrix (i.e. batch) dimensions are broadcasted (and thus must be broadcastable). For example, if `input` is a \((j \times 1 \times n \times m)\) tensor and `other` is a \((k \times m \times p)\) tensor, `out` will be a \((j \times k \times n \times p)\) tensor.
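The batch-broadcasting rule can be checked directly by inspecting the result's shape. This is a minimal sketch with arbitrarily chosen sizes (j = 2, k = 5, n = 3, m = 4, p = 6), guarded the same way as the examples below:

```r
library(torch)

if (torch_is_installed()) {
  # (j x 1 x n x m) batched with (k x m x p): the size-1 batch dimension
  # of `a` broadcasts against the k batch dimension of `b`.
  a <- torch_randn(c(2, 1, 3, 4))
  b <- torch_randn(c(5, 4, 6))
  out <- torch_matmul(a, b)
  out$shape # expected: 2 5 3 6, i.e. (j x k x n x p)
}
```

Only the trailing two dimensions participate in the matrix product; all leading dimensions follow the usual broadcasting rules.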
Examples
if (torch_is_installed()) {
# vector x vector
tensor1 <- torch_randn(c(3))
tensor2 <- torch_randn(c(3))
torch_matmul(tensor1, tensor2)
# matrix x vector
tensor1 <- torch_randn(c(3, 4))
tensor2 <- torch_randn(c(4))
torch_matmul(tensor1, tensor2)
# batched matrix x broadcasted vector
tensor1 <- torch_randn(c(10, 3, 4))
tensor2 <- torch_randn(c(4))
torch_matmul(tensor1, tensor2)
# batched matrix x batched matrix
tensor1 <- torch_randn(c(10, 3, 4))
tensor2 <- torch_randn(c(10, 4, 5))
torch_matmul(tensor1, tensor2)
# batched matrix x broadcasted matrix
tensor1 <- torch_randn(c(10, 3, 4))
tensor2 <- torch_randn(c(4, 5))
torch_matmul(tensor1, tensor2)
}
#> torch_tensor
#> (1,.,.) =
#> -0.1923 0.4107 -0.1598 0.4698 -0.3438
#> 0.8307 -0.7669 -1.6040 0.2487 1.5457
#> -1.5150 2.3265 2.6129 0.9229 -4.5403
#>
#> (2,.,.) =
#> -0.8151 -2.1910 -0.7668 1.7592 2.3295
#> -0.0131 0.1066 -0.7571 1.6599 -0.6498
#> 0.5685 -3.0548 -1.0582 -1.8672 5.1993
#>
#> (3,.,.) =
#> 0.2584 0.8887 0.5973 -0.9306 -1.0329
#> 0.2357 0.1567 -0.8951 2.0399 -1.2004
#> 1.7337 1.9915 1.2468 -3.7848 -2.0507
#>
#> (4,.,.) =
#> 0.6965 -0.4939 -1.1063 0.3131 0.7353
#> 1.0965 1.9808 0.3649 -2.3069 -1.4287
#> 1.8218 0.9201 -0.9613 -2.1385 0.2080
#>
#> (5,.,.) =
#> -0.4136 -2.4539 -0.7182 0.9711 2.7323
#> 0.6499 4.0714 -0.1628 -0.4510 -3.7500
#> 0.1350 -1.6332 0.0697 -0.8396 1.9719
#>
#> (6,.,.) =
#> 0.2667 -0.0097 -1.1592 0.4403 0.8256
#> 0.2429 1.9702 0.0767 -0.5251 -1.6359
#> 1.7233 -3.8393 -0.7589 -2.6883 4.7548
#>
#> ... [the output was truncated (use n=-1 to disable)]
#> [ CPUFloatType{10,3,5} ]