Conv_transpose1d

Usage

torch_conv_transpose1d(
  input,
  weight,
  bias = NULL,
  stride = 1L,
  padding = 0L,
  output_padding = 0L,
  groups = 1L,
  dilation = 1L
)

Arguments

input

input tensor of shape \((\mbox{minibatch} , \mbox{in\_channels} , iW)\)

weight

filters of shape \((\mbox{in\_channels} , \frac{\mbox{out\_channels}}{\mbox{groups}} , kW)\)

bias

optional bias of shape \((\mbox{out\_channels})\). Default: NULL

stride

the stride of the convolving kernel. Can be a single number or a tuple (sW,). Default: 1

padding

dilation * (kernel_size - 1) - padding zero-padding will be added to both sides of each dimension in the input. Can be a single number or a tuple (padW,). Default: 0

output_padding

additional size added to one side of each dimension in the output shape. Can be a single number or a tuple (out_padW,). Default: 0

groups

split input into groups; \(\mbox{in\_channels}\) must be divisible by the number of groups. Default: 1

dilation

the spacing between kernel elements. Can be a single number or a tuple (dW,). Default: 1

conv_transpose1d(input, weight, bias=NULL, stride=1, padding=0, output_padding=0, groups=1, dilation=1) -> Tensor

Applies a 1D transposed convolution operator (sometimes also called "deconvolution") over an input signal composed of several input planes.

See nn_conv_transpose1d() for details and output shape.
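The output length follows the standard transposed-convolution shape formula, \(iW_{out} = (iW - 1) \times \mbox{stride} - 2 \times \mbox{padding} + \mbox{dilation} \times (kW - 1) + \mbox{output\_padding} + 1\). As a sketch, the helper below (a hypothetical function, not part of the torch package) computes this in plain R:

```r
# Hypothetical helper: expected output length of a 1D transposed convolution,
# following the shape formula documented for nn_conv_transpose1d()
conv_transpose1d_out_len <- function(l_in, kernel_size, stride = 1, padding = 0,
                                     output_padding = 0, dilation = 1) {
  (l_in - 1) * stride - 2 * padding +
    dilation * (kernel_size - 1) + output_padding + 1
}

# Input width 50, kernel width 5, all other arguments at their defaults:
conv_transpose1d_out_len(50, 5)
#> [1] 54
```

This matches the example below, where an input of shape (20, 16, 50) and a weight of shape (16, 33, 5) produce an output of shape (20, 33, 54).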

Examples

if (torch_is_installed()) {

inputs <- torch_randn(c(20, 16, 50))
weights <- torch_randn(c(16, 33, 5))
nnf_conv_transpose1d(inputs, weights)
}
#> torch_tensor
#> (1,.,.) = 
#>  Columns 1 to 8  -2.7172  -5.5210   7.9374   3.0906  21.3580 -12.6189  -4.3258   5.3093
#>    1.3885 -12.0791 -11.1986   7.9128   2.8482   1.6760  -3.1183   7.3039
#>    2.4756  -7.9809  -6.5201  12.1861   7.5359  -6.6329  11.6449  -1.0432
#>    0.4642   0.9392   2.3865  -3.0983  -3.7857  -5.9992  -3.8601 -10.1172
#>   -1.1430   0.7860  -7.5414  -4.1419  -4.2946  -0.9784   3.2830   3.0879
#>    9.2455  -0.1982   1.3323  -4.7387   5.7670   1.2136   3.1590  -2.7698
#>   -2.5656  -3.8974   2.4592  -3.4115  11.9612   2.9927  -4.9330  10.8563
#>    0.8006   2.4989  -4.9836  10.0608  -0.5890   1.9800   8.0270   0.5575
#>    0.9505   3.0838  -2.3031   7.4922   3.3415   4.7355   1.3100   6.0895
#>    5.1837  -7.9399   1.3173   3.6236   0.7248  -6.4314  -0.7245  -9.9812
#>   -0.5216  -1.5727   1.0716  -8.4569  -6.8553  -6.1901   1.4440   4.1978
#>   -1.6952  -8.3790  -6.1057 -13.4262  -2.9027  -8.2298   7.7478  -5.3998
#>    5.6673  -4.3327  -0.4105   3.7569 -10.2672   4.3275  17.1337  15.0232
#>   -1.9419  -4.5694 -11.3853  12.0499   3.9334  -5.0898  -1.2279   0.6043
#>   -6.8327  -3.8372   1.8230 -21.0385  -7.1487   1.6563  -4.2593  -0.3652
#>   -0.9802   1.3503   3.7497   0.7901   2.9292   1.3497   7.8661  -0.7266
#>   -0.8863  -7.5690   4.4432 -14.6916   5.3799 -15.6941 -14.4122 -15.9588
#>   -4.2796  -2.7611  -6.4002   4.9127 -12.0253  -4.3114  -5.5047  -1.5682
#>    4.5966  -1.8511  -7.2233  -8.8994   3.2151  -0.4425  -8.4078  -1.6743
#>    2.8360  -0.9185  -5.6618  24.6701   2.6244  -3.7442   7.8791 -10.1859
#>   -1.5336   0.0708   2.7975   5.0025  -4.7435   2.2739   5.3512  -0.6133
#>    3.8462  -2.4170  14.4165  -5.7716   0.6435  -3.8622   2.6614  -0.1407
#>   -2.0673  -2.9854   3.5136  11.7400  11.0300  -6.6886  -7.5463   7.8790
#>   -1.8618  -6.4786  -1.9332  -5.9230  -0.6521  -9.2760  17.0369  10.8966
#>    5.3522  -4.3344  11.2488   4.6717   0.6802   2.8945   3.6235 -13.5183
#>   -1.6061   3.7639  -4.4445  -6.9303   8.4169   1.2716  -0.9037   6.4759
#>    2.1588  -0.3700  -5.0320  -6.8052   8.9850  -7.2337 -15.6611   2.8269
#>    1.8823   0.5136  17.5939   3.1880  13.8826  20.2181  -2.5064   2.9825
#>    6.7815   0.2294  -6.3192   7.3882  -2.7959   5.7683 -16.8174   3.9043
#> ... [the output was truncated (use n=-1 to disable)]
#> [ CPUFloatType{20,33,54} ]