Conv_transpose1d

Usage

torch_conv_transpose1d(
  input,
  weight,
  bias = list(),
  stride = 1L,
  padding = 0L,
  output_padding = 0L,
  groups = 1L,
  dilation = 1L
)

Arguments

input

input tensor of shape \((\mbox{minibatch} , \mbox{in\_channels} , iW)\)

weight

filters of shape \((\mbox{in\_channels} , \frac{\mbox{out\_channels}}{\mbox{groups}} , kW)\)

bias

optional bias of shape \((\mbox{out\_channels})\). Default: NULL

stride

the stride of the convolving kernel. Can be a single number or a tuple (sW,). Default: 1

padding

implicit zero-padding of size dilation * (kernel_size - 1) - padding will be added to both sides of each dimension in the input. Can be a single number or a tuple (padW,). Default: 0

output_padding

additional size added to one side of each dimension in the output shape. Can be a single number or a tuple (out_padW,). Default: 0

groups

split input into groups; \(\mbox{in\_channels}\) should be divisible by the number of groups. Default: 1

dilation

the spacing between kernel elements. Can be a single number or a tuple (dW,). Default: 1

conv_transpose1d(input, weight, bias=NULL, stride=1, padding=0, output_padding=0, groups=1, dilation=1) -> Tensor

Applies a 1D transposed convolution operator, sometimes also called "deconvolution", over an input signal composed of several input planes.

See nn_conv_transpose1d() for details and output shape.
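
The sketch below checks the standard transposed-convolution length relation against a call to nnf_conv_transpose1d(). The tensor sizes and parameter values are illustrative assumptions, not taken from this page:

if (torch_is_installed()) {

# illustrative sizes (assumed): batch 1, 4 input channels, length 50, kernel 5
l_in <- 50; k <- 5; stride <- 2; padding <- 1; out_pad <- 1; dilation <- 1

x <- torch_randn(c(1, 4, l_in))   # (minibatch, in_channels, iW)
w <- torch_randn(c(4, 8, k))      # (in_channels, out_channels/groups, kW)

out <- nnf_conv_transpose1d(
  x, w,
  stride = stride, padding = padding,
  output_padding = out_pad, dilation = dilation
)

# expected length: (l_in - 1)*stride - 2*padding + dilation*(k - 1) + out_pad + 1
expected <- (l_in - 1) * stride - 2 * padding + dilation * (k - 1) + out_pad + 1
c(dim(out)[3], expected)          # the two values should agree
}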

Examples

if (torch_is_installed()) {

inputs <- torch_randn(c(20, 16, 50))
weights <- torch_randn(c(16, 33, 5))
nnf_conv_transpose1d(inputs, weights)
}
#> torch_tensor
#> (1,.,.) = 
#>  Columns 1 to 8  -3.7970   0.0306  -7.4101   0.5487   0.0371   4.3148  17.2155   3.1767
#>    4.1434  -5.0221 -10.4293   5.7540   5.5839  15.3002   7.5529 -11.4765
#>    2.6183  -0.5493   3.3843  -2.5425  -0.2725  11.1694  -4.5535  -0.8787
#>    3.0417  -3.5166   0.9537  12.4083 -22.2861   2.2683  -5.6730  -6.2020
#>    2.1646  -5.1454   5.2118  -6.6245 -10.1774  -3.4324  -2.0887  -8.2300
#>   -5.9577  -6.4750  -3.3738  -9.1615   6.7858  -5.5065  15.3239 -11.6739
#>   -1.2773  -5.4539 -10.3425   6.8600   4.9792  -3.9591   5.7845 -11.7479
#>   -0.3564  -1.0379  -0.6918  11.8887   0.6291   3.9001   7.0605  -6.1026
#>    1.2340   0.1320   8.7304  -3.7244 -19.7005  -5.4760  -9.2298  15.7800
#>   -7.8940   1.1021   5.3811   1.2006  12.6921   6.9255  -3.6718  -8.3819
#>   10.4269  -0.1915  -1.2430   4.9537  -6.4828 -11.2792  10.4432  -7.9728
#>   -1.5812 -12.4732   0.9906  -2.6339  14.5224   3.9006  -9.1899  14.9470
#>    2.7218  -3.3650   7.7006   4.8242   1.7608  -3.2149  -4.9169  -2.6257
#>    2.5254   0.0641  -7.2430  -6.4259  -6.0820  10.7569   7.1801   2.0341
#>    4.4664   4.3091   7.9897   9.4205   4.3075   7.0819  20.3309  11.2183
#>    2.9179  -6.4646  -3.6986  -3.0834   3.2651  -8.0544  -3.6222   7.0325
#>    2.9445   0.9819   3.7767   6.7219   0.9007   2.3822   1.7560   1.9586
#>   -3.0140  -0.0712  -8.1481 -12.0154   0.7664  -1.0021  -5.7423   3.6251
#>    3.4531  -7.4723  -7.1601   0.9353 -10.0990  16.7257  -7.6331   9.0751
#>    1.0915  -6.1857   0.7525  -3.6888   1.4935  -6.2711   1.7286   1.8451
#>   -7.5245   4.1856   0.3041 -12.5019   8.1194 -10.9233  12.1739 -17.1902
#>    6.7848   6.9474   4.5384  -0.9641 -10.8215  -0.1875  -8.6932  15.1684
#>   -2.2501   0.2668   4.5704   7.0512   7.1663   5.0249   0.3187  -7.0778
#>    1.8420  -3.4783   1.0221  -4.9485   1.0261  25.6142 -15.4773   5.9593
#>    0.9120  -1.9754 -12.3742  -3.7374  -6.0467  -6.1327   0.5563 -14.7642
#>   -0.4196   5.3070  -1.8569 -16.2045 -13.1267  -6.6503   0.5715 -13.2106
#>    0.0047   1.7316 -12.6317  -5.8735   1.2397   2.4749   6.5785 -11.0904
#>   -0.9311  10.4083  -0.5375  -2.7337   9.6201   0.8769  12.7970  -4.8749
#>   -3.9516  11.9967  -4.8738 -13.4600  -0.1229 -18.9877  24.1917  -7.8215
#> ... [the output was truncated (use n=-1 to disable)]
#> [ CPUFloatType{20,33,54} ]
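
When stride > 1, several input lengths are mapped by conv1d to the same output length, and output_padding chooses which of those lengths the transposed convolution produces. A small sketch of this (the shapes and values here are illustrative assumptions, not from the example above):

if (torch_is_installed()) {

# conv1d collapses lengths 7 and 8 to the same output length at stride 2
w_fwd <- torch_randn(c(8, 4, 3))   # conv1d weight: (out_channels, in_channels, kW)
a <- nnf_conv1d(torch_randn(c(1, 4, 7)), w_fwd, stride = 2)
b <- nnf_conv1d(torch_randn(c(1, 4, 8)), w_fwd, stride = 2)
c(dim(a)[3], dim(b)[3])            # both are 3

# conv_transpose1d yields the shorter length by default; output_padding = 1
# recovers the longer one (output_padding must be smaller than stride or dilation)
w_bwd <- torch_randn(c(8, 4, 3))   # weight: (in_channels, out_channels/groups, kW)
y <- torch_randn(c(1, 8, 3))
dim(nnf_conv_transpose1d(y, w_bwd, stride = 2))[3]                      # 7
dim(nnf_conv_transpose1d(y, w_bwd, stride = 2, output_padding = 1))[3]  # 8
}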