Conv_transpose1d

Usage

torch_conv_transpose1d(
  input,
  weight,
  bias = list(),
  stride = 1L,
  padding = 0L,
  output_padding = 0L,
  groups = 1L,
  dilation = 1L
)

Arguments

input

input tensor of shape \((\mbox{minibatch} , \mbox{in\_channels} , iW)\)

weight

filters of shape \((\mbox{in\_channels} , \frac{\mbox{out\_channels}}{\mbox{groups}} , kW)\)

bias

optional bias of shape \((\mbox{out\_channels})\). Default: NULL

stride

the stride of the convolving kernel. Can be a single number or a tuple (sW,). Default: 1

padding

implicit zero padding of dilation * (kernel_size - 1) - padding will be added to both sides of each dimension in the input. Can be a single number or a tuple (padW,). Default: 0

output_padding

additional size added to one side of each dimension in the output shape. Can be a single number or a tuple (out_padW,). Default: 0

groups

split input into groups; \(\mbox{in\_channels}\) should be divisible by the number of groups. Default: 1

dilation

the spacing between kernel elements. Can be a single number or a tuple (dW,). Default: 1

conv_transpose1d(input, weight, bias=NULL, stride=1, padding=0, output_padding=0, groups=1, dilation=1) -> Tensor

Applies a 1D transposed convolution operator (sometimes called a "deconvolution") over an input signal composed of several input planes.

See nn_conv_transpose1d() for details and output shape.

Examples

if (torch_is_installed()) {

inputs <- torch_randn(c(20, 16, 50))  # (minibatch, in_channels, iW)
weights <- torch_randn(c(16, 33, 5))  # (in_channels, out_channels/groups, kW)
nnf_conv_transpose1d(inputs, weights)
}
#> torch_tensor
#> (1,.,.) = 
#>  Columns 1 to 8  -4.7696  -2.8940   1.5367  11.9202   6.9588   6.8900  -7.8707  -3.0269
#>    1.9332  -3.6600 -10.3648   4.6100   1.0884   8.1124   4.2168  11.4249
#>    1.6623   2.2792  -0.4219 -14.5193   2.3775   6.3519  -3.5956  -9.9562
#>    2.0600  -0.6124   2.1332   3.9518  -2.8784  -2.9404 -13.9075  -0.5227
#>    4.8003   3.8980   4.8343  11.0290 -14.0322  -0.5301  -0.2151  -8.1132
#>    2.2644  -7.4201  -1.1963  13.7711  -5.3473 -19.8623   6.9019  -0.7380
#>    0.5485  -0.8705   0.7581   5.1667 -13.5690   6.7805   1.6512  10.3643
#>    2.2890  -0.2134   2.0931   8.3469  -5.9853   1.3560   4.0922   6.3439
#>   -5.0548  -1.8819   1.9894  12.8753  -1.6963  -4.3097   1.5606   3.7400
#>    2.5202  -1.9767   3.8232  -9.9251   5.3089 -12.0474   7.0715 -10.8383
#>    0.1718   2.6484   1.7188  -9.9803  13.7730  -5.4827 -18.9986   9.9721
#>   -2.6194   2.3181  -4.4759  -0.2194  -9.1969  16.2701  -3.8546  -6.8419
#>   -2.5503   4.4713  -7.8062   0.6898  -0.6400  -6.8276  15.3370 -16.5785
#>   -7.6800   6.4260   2.6429   4.6031  19.4087 -10.7362 -18.0581   5.4812
#>   -3.6878  -1.4446   3.8696  14.0295  -2.2282   1.5008   3.8754  27.0040
#>   -0.0017   6.3336  -3.5041  -6.8385  -4.5726   6.4297   8.2359   4.0597
#>   -2.4606  -1.4065   4.3271 -10.3850  -4.9211 -17.9213   0.7216   6.2356
#>   -1.5179   3.1900   3.9493   3.1257  -9.6468   1.4326   1.3489   7.5082
#>   -2.2330   0.7512   7.0323  19.7707   7.1260  -2.5093   0.7749   5.0011
#>    0.7939  -7.0845   4.9277  -1.4495   6.0507  -8.1035  10.8001   1.6986
#>    0.3231  -3.1512   4.6537   4.2055   1.0295  -2.7192   8.8490   0.7404
#>    0.5448   2.7722  -0.4173  14.5076   2.8634   4.6111  -3.6748  19.1798
#>    4.5329   0.3070  -4.9178  -5.7158   6.4690  12.7364   9.9333   5.1336
#>    1.8342  -9.8516 -12.1799   8.8505  -0.4872 -12.2198  -4.8120  -1.7788
#>   -3.6077   7.3123  -3.1200  -7.4460  17.9627   4.1050 -10.9884  -2.4110
#>   -1.5733  -9.6317   4.0961 -10.2130  -3.6755   1.9549 -12.3862 -10.9222
#>    0.1818  -6.2733  -4.5201   3.9216   6.8123   5.7648   3.2555   5.7276
#>    0.2800   8.7789  -8.2849   0.2010  -5.7104   9.9855  -0.3018  10.5614
#>   -1.2727   2.9694  -0.8900   1.9983   0.7168  -0.4254  10.1794   1.5611
#> ... [the output was truncated (use n=-1 to disable)]
#> [ CPUFloatType{20,33,54} ]
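The output length 54 in `{20,33,54}` above follows from the standard transposed-convolution shape formula documented for nn_conv_transpose1d(). As a quick check, the formula can be evaluated directly in plain R; the helper name below is illustrative, not part of the torch API:

```r
# Output length of a 1D transposed convolution:
#   l_out = (l_in - 1) * stride - 2 * padding +
#           dilation * (kernel_size - 1) + output_padding + 1
# (conv_transpose1d_out_len is a hypothetical helper for illustration)
conv_transpose1d_out_len <- function(l_in, kernel_size, stride = 1, padding = 0,
                                     output_padding = 0, dilation = 1) {
  (l_in - 1) * stride - 2 * padding +
    dilation * (kernel_size - 1) + output_padding + 1
}

# Example above: input width 50, kernel width 5, all other arguments at defaults
conv_transpose1d_out_len(50, 5)  # 54
```

With the defaults (stride 1, no padding, no dilation), a transposed convolution therefore grows the input by kernel_size - 1 samples, the mirror image of how an ordinary convolution shrinks it.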