Conv_transpose1d

Usage

torch_conv_transpose1d(
  input,
  weight,
  bias = list(),
  stride = 1L,
  padding = 0L,
  output_padding = 0L,
  groups = 1L,
  dilation = 1L
)

Arguments

input

input tensor of shape \((\mbox{minibatch} , \mbox{in\_channels} , iW)\)

weight

filters of shape \((\mbox{in\_channels} , \frac{\mbox{out\_channels}}{\mbox{groups}} , kW)\)

bias

optional bias of shape \((\mbox{out\_channels})\). Default: NULL

stride

the stride of the convolving kernel. Can be a single number or a tuple (sW,). Default: 1

padding

zero padding of size dilation * (kernel_size - 1) - padding will be implicitly added to both sides of each dimension in the input. Can be a single number or a tuple (padW,). Default: 0

output_padding

additional size added to one side of each dimension in the output shape. Can be a single number or a tuple (out_padW,). Default: 0

groups

split input into groups; \(\mbox{in\_channels}\) should be divisible by the number of groups (see the grouped-call sketch after this argument list). Default: 1

dilation

the spacing between kernel elements. Can be a single number or a tuple (dW,). Default: 1
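
The following sketch illustrates the grouped-filter shape convention described above; the tensor sizes and the groups = 2 setting are illustrative assumptions, not values taken from this page.

# assumed example: 16 input channels, 32 output channels, groups = 2
# weight shape is (in_channels, out_channels / groups, kW) = (16, 16, 5)
x <- torch_randn(c(4, 16, 50))                 # (minibatch, in_channels, iW)
w <- torch_randn(c(16, 16, 5))                 # filters split across 2 groups
y <- torch_conv_transpose1d(x, w, groups = 2)
y$shape                                        # 4 32 54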

conv_transpose1d(input, weight, bias=NULL, stride=1, padding=0, output_padding=0, groups=1, dilation=1) -> Tensor

Applies a 1D transposed convolution operator over an input signal composed of several input planes, sometimes also called "deconvolution".

See nn_conv_transpose1d() for details and output shape.
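
For reference, the output length documented in nn_conv_transpose1d() is

\(L_{out} = (L_{in} - 1) \times \mbox{stride} - 2 \times \mbox{padding} + \mbox{dilation} \times (\mbox{kernel\_size} - 1) + \mbox{output\_padding} + 1\)

The helper below is a minimal sketch (its name is invented for illustration) that evaluates this formula; with the defaults it reproduces the 54-column output of the example that follows.

# hypothetical helper: expected output length of a 1D transposed convolution
conv_transpose1d_out_len <- function(l_in, kernel_size, stride = 1, padding = 0,
                                     output_padding = 0, dilation = 1) {
  (l_in - 1) * stride - 2 * padding +
    dilation * (kernel_size - 1) + output_padding + 1
}

conv_transpose1d_out_len(50, kernel_size = 5)  # 54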

Examples

if (torch_is_installed()) {
  # input: (minibatch = 20, in_channels = 16, iW = 50)
  inputs <- torch_randn(c(20, 16, 50))
  # filters: (in_channels = 16, out_channels / groups = 33, kW = 5)
  weights <- torch_randn(c(16, 33, 5))
  nnf_conv_transpose1d(inputs, weights)
}
#> torch_tensor
#> (1,.,.) = 
#>  Columns 1 to 8   4.2270   5.0435  -6.3353  -2.4479   8.4927  -6.1353  -9.1178  12.6868
#>   -2.9427   7.0022 -15.1601 -10.2996  -2.5924  -3.7390  -1.6518   2.4021
#>   -2.3510   5.5449  -4.1644  -1.8604  -4.8461  -6.2622  19.9087   2.0569
#>   -9.2569  -4.7835 -10.5825  -5.4597  -4.7332   7.8321   7.1915   6.2045
#>   -5.2800   1.9405   0.2351   5.3755  20.5965  18.0771 -13.9757  -0.4939
#>   -1.4126 -16.6875   2.7980  -9.6152  -0.4819  19.7436  12.3265  -4.8315
#>    9.2663  -0.1116  -6.0854  12.6699   3.5912 -16.2702  11.2045  11.5435
#>    5.5750   1.4520  -1.3390  -7.4680  -2.3386   8.8593   3.3055  14.0850
#>    0.5688   4.1446  13.7972  -9.7780  -2.4835   9.6557  -5.3873 -12.0718
#>    2.0181   2.2470   9.0182   0.0189 -22.3235  -2.6539  -9.1116   6.3429
#>    4.7949   3.0544   7.3213   1.6739   6.4301 -16.8928   1.0165  -7.1461
#>    5.7287  10.7683   4.4464  -4.0700   9.1727  -4.0279   5.2919   4.2868
#>   -3.0581  -1.3749   1.7676  16.6810   3.5666  -3.2760  -1.5714  -5.9691
#>   -3.8267   9.7193  -1.0998 -11.1353  -3.2276 -14.4728   1.7085   2.4879
#>    6.5456  13.3775   0.2992  -4.1042 -10.6194  -0.3149   1.2752  -3.9367
#>   13.1684  17.4268  20.2063   8.4135   3.1918   0.4891  12.7423   2.3491
#>   -7.6872  -3.9823   2.5146  -3.0340 -11.2203   6.1164  17.3029   7.0512
#>    5.9296   5.5207   3.2273   8.0103   3.4446  -7.9687   2.7664 -14.6469
#>    3.1046   6.4510  10.8277  -5.9371  -1.6030  17.7204  -0.2348  14.8925
#>   -8.5329  -3.9996  -6.0822   8.1753 -13.5728 -18.1530 -12.5042   0.6704
#>   -0.1518  -0.1976  -1.0316   3.8487 -12.2593  -2.6938  14.5915  -7.4405
#>    4.0566  -8.2976   1.4823  -9.5261   7.2054  -7.9948   4.6697   3.9695
#>    7.4532   8.0343  -0.3438   0.7933  -2.6172  -0.8538 -10.1818  -6.2209
#>    1.7606  -0.1643   6.7495 -14.5220   2.1626   9.4029 -14.1111  -1.7901
#>    2.6798   1.0328  -0.0210  -4.9384  -1.6294 -17.2422  -6.5565  10.2851
#>   -6.5548  -5.3228  -6.7359  -2.7867 -20.5028   2.1835  -1.1009  -1.0414
#>   -0.5036  12.0141  -6.9928 -16.9730  18.0522  -4.1067   5.2717   9.8320
#>    7.5293  -1.6353   0.3498  -4.8461  12.0565 -11.3932  -2.9709  10.6226
#>   -2.3232  -7.0977   1.7088  -7.4353 -11.8017  -2.3758  -1.3809  -8.2254
#> ... [the output was truncated (use n=-1 to disable)]
#> [ CPUFloatType{20,33,54} ]
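
A follow-up usage sketch (the non-default stride, padding and output_padding values are illustrative assumptions) showing how these arguments change the output length, consistent with the formula above:

if (torch_is_installed()) {
  x <- torch_randn(c(20, 16, 50))
  w <- torch_randn(c(16, 33, 5))
  # assumed settings: stride = 2, padding = 1, output_padding = 1
  y <- nnf_conv_transpose1d(x, w, stride = 2, padding = 1, output_padding = 1)
  # (50 - 1) * 2 - 2 * 1 + 1 * (5 - 1) + 1 + 1 = 102
  y$shape                                      # 20 33 102
}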