Labels
dynamo-symbolic-analysis, enhancement (not as big of a feature, but technically not a bug; should be easy to fix), module: dynamo, oncall: pt2, triaged
Description
This example:

```python
mylist = []

@torchdynamo.optimize(...)
def foo():
    mylist.append(torch.randn(10) + 1)

foo()
foo()
foo()
```
We will compile 3 versions of foo:
- One guarding that mylist is len 0
- One guarding that mylist is len 1 (and has one matching tensor)
- One guarding that mylist is len 2 (and has two matching tensors)

Really, only 1 version is needed.
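The recompile-per-call behavior above can be sketched in pure Python. This is a toy model of a guard cache, not TorchDynamo's real API (`compile_with_length_guard`, `call_optimized`, and the cache structure are all hypothetical names for illustration): because each call to `foo` changes the list's length, the length guard from every prior compile fails, and a new version is compiled each time.

```python
# Toy model of a guard-based compile cache (pure Python, no torch).
# All names here are illustrative, not TorchDynamo internals.

compiled_versions = []  # cache of (guard, compiled_fn) pairs

def compile_with_length_guard(mylist, fn):
    """'Compile' fn, guarding that mylist keeps its current length."""
    expected_len = len(mylist)
    guard = lambda: len(mylist) == expected_len
    compiled_versions.append((guard, fn))
    return fn

def call_optimized(mylist, fn):
    # Reuse a cached version whose guard still passes.
    for guard, compiled in compiled_versions:
        if guard():
            return compiled()
    # All guards failed -> recompile (the wasteful part).
    return compile_with_length_guard(mylist, fn)()

mylist = []
def foo():
    mylist.append(1.0)  # mutates the list, invalidating the length guard

for _ in range(3):
    call_optimized(mylist, foo)

print(len(compiled_versions))  # each call sees a new length -> 3 compiles
```

An identity-only guard on `mylist` would let all three calls hit one cached version.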
Similarly, a different function calling mylist.clear() will guard on the contents of the list, despite never looking at the contents.
This may cause an issue for python autograd, as currently TorchDynamo will guard on the entire contents of the tape.
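The clear() case can be sketched the same way. This toy comparison (pure Python; `content_guard` and `identity_guard` are hypothetical names, not Dynamo functions) contrasts the over-broad per-element guard with the identity guard that clear() actually needs:

```python
# Toy contrast of an over-broad content guard vs. an identity guard.
# Names are illustrative only, not TorchDynamo internals.

def content_guard(lst, snapshot):
    # Over-broad: checks every element, even though clear() never reads them.
    return len(lst) == len(snapshot) and all(a is b for a, b in zip(lst, snapshot))

def identity_guard(lst, original):
    # Sufficient for clear(): only the list object itself matters.
    return lst is original

mylist = ["a", "b"]
original = mylist
snapshot = list(mylist)

print(content_guard(mylist, snapshot))   # True: contents unchanged
mylist[0] = "c"                          # mutate an element clear() never reads
print(content_guard(mylist, snapshot))   # False: forces a needless recompile
print(identity_guard(mylist, original))  # True: identity guard still passes
```

For a python autograd tape, a content guard of this shape would mean guarding on every recorded entry, which is the concern raised above.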
cc @ezyang @bdhirsh @anijain2305 @chauhang @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @soumith @msaroufim @wconstab @ngimel @mlazos @yanboliang @Xia-Weiwen @desertfire @zdevito