[user empathy day 2][based] torch.compile issues #128071

@anijain2305

Description

🐛 Describe the bug

PyTorch eager

Time = 2.8532286300323904 seconds for 10 iterations

torch.compile with backend=eager

Observations

  • One frame is being recompiled almost endlessly. Question - why does Dynamo not give up on that frame?
    torch/_dynamo/symbolic_convert.py:774] [49/974]
  • Performance is far too poor, mostly due to the recompilation

Time = 35.62640808802098 seconds for 10 iterations
Verdict - Does not work
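The runaway recompilation is surprising because Dynamo normally stops recompiling a frame once it hits `torch._dynamo.config.cache_size_limit` and falls back to eager for that frame. A minimal sketch of the symptom, using a hypothetical function (not from the repro) that Dynamo specializes on a Python int, so distinct values can fail a guard and trigger recompiles:

```python
import torch

# Hypothetical stand-in for the recompiling frame: Dynamo specializes on
# the Python int `n`, so a new value may fail a guard and trigger a
# recompile (automatic dynamic shapes may eventually generalize it).
# Once cache_size_limit is reached, the frame should fall back to eager.
@torch.compile(backend="eager")
def step(x, n):
    return x + n

x = torch.ones(4)
for n in range(1, 20):
    step(x, n)
```

Running the repro with `TORCH_LOGS="recompiles"` prints the failing guard for each recompile, which is the usual way to locate the source of churn like this.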

Other fixes

/home/anijain/local/miniconda3/envs/user_empathy_day2/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py:661: UserWarning: Graph break due to unsupported builtin flash_attn_2_cuda.PyCapsule.fwd_kvcache. This function is either a Python builtin (e.g. _warnings.warn) or a third-party C/C++ Python extension (perhaps created with pybind). If it is a Python builtin, please file an issue on GitHub so the PyTorch team can add support for it and see the next case for a workaround. If it is a third-party C/C++ Python extension, please either wrap it into a PyTorch-understood custom operator (see https://pytorch.org/docs/main/notes/custom_operators.html for more details) or, if it is traceable, use torch.compiler.allow_in_graph.
warnings.warn(msg)

torch.compile with backend=aot_eager

Error logs

No response

Minified repro

No response

Versions

NA

cc @ezyang @chauhang @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @oulgen @jamesjwu @aorenste @laithsakka @bdhirsh @msaroufim
