Conversation

maybeLee

The padding scheme of torch.nn.functional.pad differs from ONNX's Pad when an 8-element pads argument is given.
Given an input of size (1, 3, 10, 10) and pads=(1, 1, 2, 2, 3, 3, 4, 4), F.pad produces an output of size (9, 9, 14, 12) (as declared in your test scripts), because it consumes pads as (begin, end) pairs starting from the last dimension. ONNX, however, produces an output of size (5, 7, 16, 16), since its documentation specifies pads as [x1_begin, x2_begin, ..., x1_end, x2_end, ...].
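For concreteness, here is a small pure-Python sketch of the shape arithmetic under both conventions (the helper names are my own, not from either library):

```python
def torch_pad_shape(shape, pads):
    """Output shape of torch.nn.functional.pad: pads holds
    (begin, end) pairs starting from the LAST dimension."""
    out = list(shape)
    for i in range(len(pads) // 2):
        out[-(i + 1)] += pads[2 * i] + pads[2 * i + 1]
    return tuple(out)

def onnx_pad_shape(shape, pads):
    """Output shape of ONNX Pad: pads is
    [x1_begin, x2_begin, ..., x1_end, x2_end, ...]."""
    n = len(pads) // 2
    return tuple(d + pads[i] + pads[n + i] for i, d in enumerate(shape))

pads = (1, 1, 2, 2, 3, 3, 4, 4)
print(torch_pad_shape((1, 3, 10, 10), pads))  # (9, 9, 14, 12)
print(onnx_pad_shape((1, 3, 10, 10), pads))   # (5, 7, 16, 16)
```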

Therefore, the pads parameter loaded from onnx_model.graph should be transformed into the PyTorch ordering so that the padded size comes out correctly.
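A minimal sketch of such a transformation (the function name is my own; it assumes an even-length pads list):

```python
def onnx_pads_to_torch(pads):
    """Reorder ONNX pads [x1_begin, ..., xn_begin, x1_end, ..., xn_end]
    into the (begin, end)-pairs-from-last-dimension order F.pad expects."""
    n = len(pads) // 2
    begins, ends = pads[:n], pads[n:]
    out = []
    for begin, end in zip(reversed(begins), reversed(ends)):
        out += [begin, end]
    return tuple(out)

print(onnx_pads_to_torch([1, 1, 2, 2, 3, 3, 4, 4]))  # (2, 4, 2, 4, 1, 3, 1, 3)
```

Feeding the reordered tuple to F.pad on a (1, 3, 10, 10) input then yields (5, 7, 16, 16), matching the ONNX result.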

Unfortunately, I found that the pads parameter is stored in onnx_model.graph.initializer rather than as an attribute of the node itself, so a simple preprocessing of the Pad nodes' parameters is not feasible :(.

So I had to add an additional branch (which is ugly...) when loading the initializer parameters: if the target node is a Pad, we check whether its pads parameter needs to be reordered.
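That branch could look roughly like the sketch below. I use a simplified stand-in representation (a list of (op_type, input_names) tuples and a name-to-list dict) instead of the real onnx_model.graph.node / onnx_model.graph.initializer protobufs, and I assume the second input of a Pad node names its pads tensor, as in recent ONNX opsets:

```python
def reorder_pad_initializers(nodes, initializers):
    """Rewrite every initializer consumed as `pads` by a Pad node
    from the ONNX ordering into the F.pad ordering, in place."""
    # Collect the names of the initializers that feed Pad nodes.
    pad_inputs = {
        inputs[1]
        for op_type, inputs in nodes
        if op_type == "Pad" and len(inputs) > 1
    }
    for name in pad_inputs & initializers.keys():
        pads = initializers[name]
        n = len(pads) // 2
        reordered = []
        for begin, end in zip(reversed(pads[:n]), reversed(pads[n:])):
            reordered += [begin, end]
        initializers[name] = reordered
    return initializers

inits = reorder_pad_initializers(
    [("Pad", ["x", "pads"])],
    {"pads": [1, 1, 2, 2, 3, 3, 4, 4]},
)
print(inits["pads"])  # [2, 4, 2, 4, 1, 3, 1, 3]
```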

I wrote a program to exhibit this bug.
Through that code snippet, we can see that the correct output shape should be (batch_size, 226, 226, 3), but ONNX2PyTorch outputs (batch_size+1, 225, 225, 3).

The current fix passes all existing tests.
