Conv_transpose1d

Usage

torch_conv_transpose1d(
  input,
  weight,
  bias = list(),
  stride = 1L,
  padding = 0L,
  output_padding = 0L,
  groups = 1L,
  dilation = 1L
)

Arguments

input

input tensor of shape \((\mbox{minibatch} , \mbox{in\_channels} , iW)\)

weight

filters of shape \((\mbox{in\_channels} , \frac{\mbox{out\_channels}}{\mbox{groups}} , kW)\)

bias

optional bias of shape \((\mbox{out\_channels})\). Default: NULL

stride

the stride of the convolving kernel. Can be a single number or a tuple (sW,). Default: 1

padding

implicit zero-padding of dilation * (kernel_size - 1) - padding will be added to both sides of each dimension in the input. Can be a single number or a tuple (padW,). Default: 0

output_padding

additional size added to one side of each dimension in the output shape. Can be a single number or a tuple (out_padW,). Default: 0

groups

split input into groups, \(\mbox{in\_channels}\) should be divisible by the number of groups. Default: 1

dilation

the spacing between kernel elements. Can be a single number or a tuple (dW,). Default: 1

conv_transpose1d(input, weight, bias=NULL, stride=1, padding=0, output_padding=0, groups=1, dilation=1) -> Tensor

Applies a 1D transposed convolution operator (sometimes also called "deconvolution") over an input signal composed of several input planes.

See nn_conv_transpose1d() for details and output shape.
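The output length along the spatial dimension follows the standard transposed-convolution shape formula documented for nn_conv_transpose1d(). As a sketch, the helper below (a hypothetical function, not part of torch) evaluates that formula without needing a tensor:

```r
# Output length of a 1D transposed convolution:
#   l_out = (l_in - 1) * stride - 2 * padding +
#           dilation * (kernel_size - 1) + output_padding + 1
# conv_transpose1d_out_len() is an illustrative helper, not a torch export.
conv_transpose1d_out_len <- function(l_in, kernel_size, stride = 1,
                                     padding = 0, output_padding = 0,
                                     dilation = 1) {
  (l_in - 1) * stride - 2 * padding +
    dilation * (kernel_size - 1) + output_padding + 1
}

# With all defaults and the example below (l_in = 50, kW = 5):
# (50 - 1) * 1 - 0 + 1 * (5 - 1) + 0 + 1 = 54
conv_transpose1d_out_len(50, 5)
```

This matches the example output shape {20, 33, 54}: the batch size (20) is preserved, the channel dimension becomes out_channels (33), and the length grows from 50 to 54.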

Examples

if (torch_is_installed()) {

# input: minibatch of 20, 16 in_channels, width 50
inputs <- torch_randn(c(20, 16, 50))
# weight: 16 in_channels, 33 out_channels, kernel width 5
weights <- torch_randn(c(16, 33, 5))
nnf_conv_transpose1d(inputs, weights)
}
#> torch_tensor
#> (1,.,.) = 
#>  Columns 1 to 6  2.7535e+00  5.9692e+00  2.6725e+00 -3.7062e+00 -1.7732e+00 -6.1161e+00
#>   1.4304e+00 -8.3164e+00  7.2640e+00  4.8941e+00 -8.8705e-01 -6.1581e+00
#>   8.6183e+00 -3.5961e+00 -1.0209e+00  1.7172e+01 -2.6374e+00  5.0656e+00
#>  -3.6254e+00 -2.6955e-01  2.4999e+00 -8.7921e+00 -4.6360e+00 -4.5164e+00
#>   5.5594e+00 -1.9959e-01  5.4737e+00 -1.2982e+01  7.7034e+00 -4.5833e-01
#>   2.3431e+00 -1.0186e+01  1.1261e+01 -7.4490e+00 -5.1423e+00 -8.5458e+00
#>  -5.5805e+00 -8.9373e-01  2.7991e-01 -1.0056e+00  5.2032e+00 -3.7231e+00
#>   2.1807e-01  8.8215e+00 -1.2088e+01 -8.8625e+00 -9.6272e-01  3.1105e+00
#>  -2.2604e-01 -7.2921e+00 -1.9748e+00  1.2336e+00  2.4649e+00 -3.0166e+00
#>   3.9835e+00 -4.0318e+00 -6.0965e+00  1.4284e+01  7.9415e+00  1.4486e+01
#>   8.4430e+00  3.1858e+00 -3.3594e+00 -6.9739e+00  6.3874e+00  4.0826e+00
#>   1.9967e+00 -4.2638e+00  6.2495e+00 -4.2494e+00  4.1827e+00  1.8028e+01
#>   3.9198e+00 -4.5880e+00  6.4311e+00  1.4529e+01  6.9985e+00  2.5439e+00
#>   7.9876e+00  1.5879e+00 -7.1566e+00  9.5716e+00  1.9761e+00 -3.6483e+00
#>   2.2045e+00  8.8177e+00 -6.2806e+00  4.9397e+00 -2.2678e-01  1.2802e+01
#>   3.1468e+00 -2.3080e+00 -4.7396e+00  2.3984e+01  3.5655e+00  3.1012e+00
#>   2.9487e+00  1.0067e+01  7.0947e+00  1.1335e+00  1.4323e+00 -5.2705e+00
#>   7.7898e-01 -5.0194e+00 -2.7636e+00  3.0578e+00 -2.4885e+00  6.3557e+00
#>  -1.0407e+00 -4.5731e+00 -1.2929e+01 -2.6952e+00  1.3104e+01 -6.5960e+00
#>   4.4936e+00  2.2614e+00 -3.9683e+00  4.3001e+00  1.2781e+01  7.4431e+00
#>   6.6542e-01  3.7436e+00 -4.0970e+00  3.2974e+00 -2.4583e+01 -7.1216e+00
#>  -3.2995e+00  1.0607e+01 -5.3687e+00 -1.2180e+01 -6.6925e+00 -1.0186e+01
#>   2.3176e+00  3.8276e+00 -5.1010e+00 -9.2848e-01 -1.0954e+01  6.8398e-01
#>   8.6801e+00  1.4138e+01 -2.4689e+00 -8.7505e+00 -3.9415e+00  7.0819e+00
#>  -2.1720e+00 -5.8712e+00  6.1934e-01 -5.8706e+00  1.3777e+01 -1.9475e+00
#>  -3.7415e+00 -6.2899e+00 -1.5521e-01 -2.2570e+01 -2.0052e+01  2.6551e+01
#>   3.3538e+00 -5.9270e+00 -6.7479e+00  8.2007e+00 -8.8242e+00  3.9001e+00
#>   6.8586e+00 -1.6793e+00  4.1731e+00 -9.3926e-01  8.6445e+00 -1.6457e+01
#>  -1.4633e+00 -1.2912e+01  1.3691e+00  2.3296e+00  1.2596e+01 -3.3230e+01
#> ... [the output was truncated (use n=-1 to disable)]
#> [ CPUFloatType{20,33,54} ]