Conv_transpose1d

Usage

torch_conv_transpose1d(
  input,
  weight,
  bias = NULL,
  stride = 1L,
  padding = 0L,
  output_padding = 0L,
  groups = 1L,
  dilation = 1L
)

Arguments

input

input tensor of shape \((\mbox{minibatch} , \mbox{in\_channels} , iW)\)

weight

filters of shape \((\mbox{in\_channels} , \frac{\mbox{out\_channels}}{\mbox{groups}} , kW)\)

bias

optional bias of shape \((\mbox{out\_channels})\). Default: NULL

stride

the stride of the convolving kernel. Can be a single number or a tuple (sW,). Default: 1

padding

implicit zero padding of dilation * (kernel_size - 1) - padding points will be added to both sides of each dimension in the input. Can be a single number or a tuple (padW,). Default: 0

output_padding

additional size added to one side of each dimension in the output shape. Can be a single number or a tuple (out_padW,). Default: 0

groups

split input into groups; \(\mbox{in\_channels}\) should be divisible by the number of groups. Default: 1

dilation

the spacing between kernel elements. Can be a single number or a tuple (dW,). Default: 1

conv_transpose1d(input, weight, bias=NULL, stride=1, padding=0, output_padding=0, groups=1, dilation=1) -> Tensor

Applies a 1D transposed convolution operator over an input signal composed of several input planes, sometimes also called "deconvolution".

See nn_conv_transpose1d() for details and output shape.
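The output length follows directly from the arguments above. A minimal base-R sketch of that relationship (the helper name conv_transpose1d_out_len is illustrative, not part of the torch API):

```r
# Output length of a 1D transposed convolution, per the shape
# relationship documented for nn_conv_transpose1d():
#   L_out = (L_in - 1) * stride - 2 * padding +
#           dilation * (kernel_size - 1) + output_padding + 1
conv_transpose1d_out_len <- function(l_in, kernel_size, stride = 1L,
                                     padding = 0L, output_padding = 0L,
                                     dilation = 1L) {
  (l_in - 1L) * stride - 2L * padding +
    dilation * (kernel_size - 1L) + output_padding + 1L
}

# Matches the example below: input length 50, kernel size 5,
# all other arguments at their defaults -> 54.
conv_transpose1d_out_len(50L, 5L)
```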

Examples

if (torch_is_installed()) {

inputs <- torch_randn(c(20, 16, 50))
weights <- torch_randn(c(16, 33, 5))
nnf_conv_transpose1d(inputs, weights)
}
#> torch_tensor
#> (1,.,.) = 
#>  Columns 1 to 8   3.3446  -0.5589  -9.7155  21.2827   7.4966   3.8858   4.5367  15.1651
#>    0.5190  -6.0113  -7.3790  -6.9427   4.5090   2.9188   1.2517   9.9546
#>   -5.7147   7.9630  -1.3886  -5.7252  -3.8692   9.7963   0.8721  -7.9099
#>    3.6843   3.3793  -1.8921  11.2348  -0.1970   5.1658  -5.1598  10.0463
#>   -0.8520  -3.8826   1.3681   2.2083   2.8365  -6.0418  -3.8410   8.8341
#>    1.5917  -7.1906   9.5824   6.3744  -3.7134   1.4525  -4.0574   4.1022
#>    5.4699  -1.5467   2.0879  -2.2042   5.1249   0.9730  -7.7534  12.1960
#>   -1.2908  -0.4492 -10.0958  -2.9676   0.5637   4.6634   8.7122 -14.7203
#>    1.4066   0.3660  -5.7472 -17.8495  -9.7526 -14.4885  -3.5490   5.7569
#>   -5.0984   6.9356   7.8609   5.8179  14.2199  18.6194  10.0574  -3.0390
#>    3.4499 -11.6628  11.0548  -0.4680  -7.7721   1.6727   5.9422   5.0017
#>    3.9605   1.9674  -2.3999  14.3751 -13.9414   8.1201  -0.0695  24.7206
#>    1.6579  -5.4531  -7.5128  -4.1803 -12.3555   1.6558   8.9701   7.9299
#>    3.5963   3.2713  -8.3270  13.1966   6.9485  -6.7867  15.9369   7.0284
#>    0.8222  -2.6743   4.6268  -0.9585   5.6845  10.9613 -10.5321   6.3312
#>   -1.1938   1.6254  10.1024  -8.9815   3.8537   3.1725  -0.0848  -7.8358
#>   -3.4020   6.6241 -16.8466  -6.7420  13.2671   1.4891   2.7684   0.8017
#>    1.1391   0.2701   1.7874   6.3353 -26.1624   6.5855  -7.7281  -0.6782
#>   -0.9535   0.4694  -3.6775  -5.6259   7.4310  -3.3808   5.4063  -3.7166
#>   -3.0240   6.5995 -12.0051   6.2883 -13.3160   2.4072   9.4124  -3.4375
#>    4.4537  -1.1592   1.9122  -3.8365   4.5214  -0.7052  -3.0036  -3.7065
#>    4.5150 -10.2202   7.6340  -0.2806   4.7064  -5.8168 -14.6979   0.3250
#>    1.2842  -0.6350   2.7885  16.0302   9.5067  -4.3403   6.9660  -0.9677
#>    3.6582  -1.5848  -1.9037  -2.2788 -10.7937  -4.5096  -4.9518  10.2544
#>    2.4377   2.7401   1.0759   2.6141  11.4452  -0.1923   7.5376   5.3725
#>   -3.1501   6.5475   5.3811  -1.1007   4.6830   5.2095  12.7257   7.5993
#>   -1.8093   2.3718  16.4396   6.8152  15.5689  -9.4049  12.4966  -0.1438
#>   -1.6392  -3.3499  -1.6888   5.4363  15.0454  -8.5460   3.6088   4.4203
#>    1.3541  -4.6444  10.8548 -10.2606  28.7661   0.7793   5.4813 -11.0828
#> ... [the output was truncated (use n=-1 to disable)]
#> [ CPUFloatType{20,33,54} ]
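A sketch of how stride, padding, and output_padding interact, following the same example convention (the argument values here are illustrative; the expected shape follows from the output-length formula in nn_conv_transpose1d()):

```r
if (torch_is_installed()) {

inputs <- torch_randn(c(20, 16, 50))
weights <- torch_randn(c(16, 33, 5))

# stride = 2 roughly doubles the temporal resolution;
# output_padding = 1 disambiguates the output length, since
# several input lengths map to the same strided-conv output:
#   L_out = (50 - 1) * 2 - 2 * 1 + 1 * (5 - 1) + 1 + 1 = 102
out <- nnf_conv_transpose1d(inputs, weights,
                            stride = 2, padding = 1,
                            output_padding = 1)
out$shape  # 20 33 102
}
```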