Conversation

@angelayi (Contributor) commented Jun 1, 2023

@pytorch-bot bot commented Jun 1, 2023

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/102716

Note: Links to docs will display an error until the docs builds have been completed.

❌ 3 New Failures

As of commit bf91008, the following jobs have failed:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@angelayi marked this pull request as draft June 1, 2023 08:20
@angelayi changed the title from [export] Initial Deserialization to [export] Initial Deserialization v2 Jun 1, 2023
@angelayi changed the title from [export] Initial Deserialization v2 to [export] Initial deserialization v2 Jun 1, 2023
@angelayi mentioned this pull request Jun 2, 2023
@angelayi changed the base branch from main to seralize2 June 2, 2023 18:54
@angelayi marked this pull request as ready for review June 2, 2023 18:55
@github-actions bot added the release notes: quantization label Jun 6, 2023
@angelayi changed the base branch from seralize2 to main June 6, 2023 05:41
@angelayi removed the request for review from a team June 6, 2023 05:41
@angelayi requested a review from zhxchen17 June 6, 2023 15:31
ret["stack_trace"] = stack_trace
# Need an explicit None check instead of the walrus operator, because
# module_fqn can be the empty string if the node belongs to the root,
# and an empty string is falsy, so the walrus `if` would skip it :(
Contributor:
lol, I don't like the walrus operator anyway. Too many falsy values, including empty lists and dicts as well...
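
For illustration only (not part of the diff): a minimal sketch of the pitfall the thread describes, using a hypothetical metadata dict.

```
# "" is falsy, so a truthiness check via the walrus operator silently
# skips a root-module node, while an explicit None check does not.
metadata = {"module_fqn": ""}  # hypothetical: node belongs to the root

if fqn := metadata.get("module_fqn"):
    print("never reached: empty string is falsy")
if (fqn := metadata.get("module_fqn")) is not None:
    print("reached: the key is present, even though it is empty")
```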

module_fqn = metadata.get("module_fqn")
if module_fqn is not None:
    ret["module_fqn"] = module_fqn
# TODO(angelayi) add nn_module_stack and source_fn
Contributor:

Are there existing tests for nn_module_stack preservation? Not here, but for the future: might be sweet to run an existing set of tests through serde roundtrip "automatically," say by wrapping export with export + serde with a test-only decorator.
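
(A hedged sketch of that idea; `serialize` and `deserialize` below are placeholders, not the real serde API.)

```
import functools

def with_serde_roundtrip(export_fn, serialize, deserialize):
    """Test-only wrapper: roundtrip every exported program through
    serde before tests see it, so metadata dropped by serialization
    (e.g. nn_module_stack) trips existing assertions."""
    @functools.wraps(export_fn)
    def wrapper(*args, **kwargs):
        ep = export_fn(*args, **kwargs)
        return deserialize(serialize(ep))  # exercise the serde path
    return wrapper

# Illustrative use (monkeypatching is one option):
# torch.export.export = with_serde_roundtrip(
#     torch.export.export, serialize, deserialize)
```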

Contributor Author (@angelayi):

Yup, I have an upcoming diff once this lands that checks for it.

@zhxchen17 (Contributor) left a comment:

I feel we should leave the symbolic int parts out of the scope of this diff. For example, we try to deserialize sym ints and bools from the serialized buffer, but I don't think we have a clear plan for serializing them yet. Another example: we're trying to serialize "_operator" ops, but I would leave those out until we're clearer on what to do with them.

@zhxchen17 (Contributor) left a comment:

clicked twice

@zhxchen17 (Contributor) left a comment:

After discussing offline, I think we can move forward for now once all the comments are addressed. Next time, please consider splitting a big PR that touches different parts into smaller ones.

@angelayi (Contributor Author) commented Jun 7, 2023

@pytorchbot merge -f "failures appear on master"

@pytorchmergebot (Collaborator):

Merge started

Your change will be merged immediately since you used the force (-f) flag, bypassing any CI checks (ETA: 1-5 minutes).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging: check the merge workflow status here.

pytorchmergebot pushed a commit that referenced this pull request Dec 4, 2024
Following #102716, per @angelayi's suggestion.

Note that, in general, enums as inputs are not supported.

repro:
```
import enum
from enum import auto

import torch

class TestEnum(enum.Enum):
    A = auto()
    B = auto()

    @staticmethod
    def from_string(s):
        return TestEnum[s.upper()]

class M(torch.nn.Module):
    def forward(self, x, en):
        return x.clone()

input1 = (
    torch.rand(10, device="cuda"),
    {TestEnum.A: torch.rand(10, device="cuda")},  # enum instance as dict key
)
inputs = [input1]
model = M().cuda()

_ = model(*input1)

ep = torch.export.export(model, input1, strict=False)
path = torch._inductor.aot_compile(ep.module(), input1)
```

Differential Revision: D66269157
Pull Request resolved: #141525
Approved by: https://github.com/angelayi
pobin6 pushed a commit to pobin6/pytorch that referenced this pull request Dec 5, 2024 (same commit message as above).
@github-actions bot deleted the deserialize2 branch December 6, 2024 02:11
AmdSampsa pushed a commit to AmdSampsa/pytorch that referenced this pull request Dec 9, 2024 (same commit message as above).
Labels: Merged, module: cpu (CPU specific problem, e.g., perf, algorithm), release notes: export, release notes: quantization