
test_sampled_addmm_zero_sized causes CUDA memory exception #72177

@malfet

Description


🐛 Describe the bug

Discovered while running #72016, which adds an explicit error even when a CUDA memory exception causes early termination.

See this run log for an example: https://github.com/pytorch/pytorch/runs/5029465594?check_suite_focus=true
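
For reference, a minimal repro sketch of the kind of call the test exercises (an assumption on my part; the exact shapes and tensor construction in test_sampled_addmm_zero_sized may differ):

```python
import itertools
import torch

device = "cuda"

# Iterate over shape combinations that include zero-sized dimensions.
for m, n, k in itertools.product([0, 5], repeat=3):
    # Sparse CSR "sampling mask" of shape (m, n) plus dense factors a @ b.
    c = torch.zeros(m, n, device=device).to_sparse_csr()
    a = torch.rand(m, k, device=device)
    b = torch.rand(k, n, device=device)
    # Expected: an empty (m, n) sparse CSR result when a dimension is 0.
    # Observed: a CUDA memory exception that terminates the test run early.
    out = torch.sparse.sampled_addmm(c, a, b)
    print(m, n, k, out.shape)
```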

Versions

N/A

cc @nikitaved @pearu @cpuhrsch @ngimel

Metadata

Assignees

No one assigned

Labels

module: crash (Problem manifests as a hard crash, as opposed to a RuntimeError)
module: cuda (Related to torch.cuda, and CUDA support in general)
module: sparse (Related to torch.sparse)
triaged (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module)

