[FSDP] activation checkpointing with CheckpointImpl.NO_REENTRANT fails on flash-attention GPTLMHeadModel #103726

@nkflash

Description

🐛 Describe the bug

I enabled FSDP with activation checkpointing on GPTLMHeadModel and got the error below when using CheckpointImpl.NO_REENTRANT:

Traceback (most recent call last):
  File "train_llama_fsdp_datasets.py", line 219, in <module>
    trainer.do_train(
  File "/home/elrond/code/flagai-internal/flagai/env_trainer_v1.py", line 676, in do_train
    lm_loss, cached = self.train_step_pytorchFSDP(
  File "/home/elrond/code/flagai-internal/flagai/env_trainer_v1.py", line 992, in train_step_pytorchFSDP
    scaler.scale(lm_loss).backward()
  File "/opt/conda/lib/python3.8/site-packages/torch/_tensor.py", line 488, in backward
    torch.autograd.backward(
  File "/opt/conda/lib/python3.8/site-packages/torch/autograd/__init__.py", line 204, in backward
    Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
  File "/opt/conda/lib/python3.8/site-packages/torch/autograd/function.py", line 274, in apply
    return user_fn(self, *args)
  File "/opt/conda/lib/python3.8/site-packages/flash_attn-1.0.4-py3.8-linux-x86_64.egg/flash_attn/ops/layer_norm.py", line 179, in backward
    x, x0, dmask, gamma, mu, rsigma, rowscale, colscale = ctx.saved_tensors
RuntimeError: !grad_accumulator_.expired() INTERNAL ASSERT FAILED at "../torch/csrc/autograd/saved_variable.cpp":226, please report a bug to PyTorch. No grad accumulator for a saved leaf

If I use CheckpointImpl.REENTRANT instead of CheckpointImpl.NO_REENTRANT, FSDP works fine.
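
For reference, the only difference in the working configuration is the checkpoint_impl passed to checkpoint_wrapper; a minimal sketch of that wrapper (the reentrant_wrapper name is just for illustration, same imports as the snippet below):

    # Reentrant activation checkpointing: this variant trains without the assert.
    reentrant_wrapper = functools.partial(
        checkpoint_wrapper,
        checkpoint_impl=CheckpointImpl.REENTRANT,
    )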

The relevant code looks like this:

    import functools
    import re

    import torch
    from torch.distributed.algorithms._checkpoint.checkpoint_wrapper import (
        CheckpointImpl,
        apply_activation_checkpointing,
        checkpoint_wrapper,
    )
    from torch.distributed.fsdp import BackwardPrefetch, CPUOffload, ShardingStrategy
    from torch.distributed.fsdp import FullyShardedDataParallel as FSDP
    from torch.distributed.fsdp.wrap import transformer_auto_wrap_policy

    torch.cuda.set_device(self.local_rank)

    # Collect the transformer layer classes whose names match self.fsdp_layers_to_wrap.
    layers = set()
    for module in model.modules():
        name = module.__class__.__name__
        for layer in self.fsdp_layers_to_wrap:
            if re.match(layer, name):
                layers.add(module.__class__)
    if self.rank == 0:
        print("Wrapped layers", layers)

    auto_wrap_policy = functools.partial(
        transformer_auto_wrap_policy,
        transformer_layer_cls=layers,
    )
    cpu_offload = CPUOffload(offload_params=self.fsdp_cpu_offload)
    # bfSixteen is the project's bfloat16 MixedPrecision policy (defined elsewhere).
    self.model = FSDP(model,
                      auto_wrap_policy=auto_wrap_policy,
                      mixed_precision=bfSixteen if self.bf16 else None,
                      sharding_strategy=ShardingStrategy.FULL_SHARD,
                      device_id=torch.cuda.current_device(),
                      backward_prefetch=BackwardPrefetch.BACKWARD_PRE,
                      cpu_offload=cpu_offload)

    # Wrap the same transformer layers with non-reentrant activation checkpointing.
    non_reentrant_wrapper = functools.partial(
        checkpoint_wrapper,
        checkpoint_impl=CheckpointImpl.NO_REENTRANT,
    )

    apply_activation_checkpointing(
        model,
        checkpoint_wrapper_fn=non_reentrant_wrapper,
        check_fn=lambda submodule: any(isinstance(submodule, item) for item in layers),
    )

    if self.load_dir:
        # log_dist and load_checkpoint are project helpers (defined elsewhere).
        log_dist("loading checkpoints from {}".format(self.load_dir))
        self.sd = load_checkpoint(self.model,
                                  load_dir=self.load_dir,
                                  load_type=self.load_type)
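
As I understand it, CheckpointImpl.NO_REENTRANT selects the non-reentrant checkpointing implementation built on torch.utils.checkpoint with use_reentrant=False. A minimal, self-contained sketch of that call in isolation, with a hypothetical block and input only to illustrate the API on the failing code path:

    import torch
    from torch.utils.checkpoint import checkpoint

    def checkpointed_forward(block: torch.nn.Module, x: torch.Tensor) -> torch.Tensor:
        # Activations inside `block` are recomputed during backward instead of being saved.
        return checkpoint(block, x, use_reentrant=False)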

Versions

Collecting environment information...
PyTorch version: 2.1.0.dev20230522+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A

OS: Ubuntu 20.04.1 LTS (x86_64)
GCC version: (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0
Clang version: Could not collect
CMake version: version 3.19.6
Libc version: glibc-2.31

Python version: 3.8.8 (default, Feb 24 2021, 21:46:12) [GCC 7.3.0] (64-bit runtime)
Python platform: Linux-5.4.0-113-generic-x86_64-with-glibc2.10
Is CUDA available: True
CUDA runtime version: 11.7.64
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100-SXM4-40GB
GPU 1: NVIDIA A100-SXM4-40GB
GPU 2: NVIDIA A100-SXM4-40GB
GPU 3: NVIDIA A100-SXM4-40GB
GPU 4: NVIDIA A100-SXM4-40GB
GPU 5: NVIDIA A100-SXM4-40GB
GPU 6: NVIDIA A100-SXM4-40GB
GPU 7: NVIDIA A100-SXM4-40GB

Nvidia driver version: 470.129.06
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.1.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.1.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.1.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.1.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.1.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.1.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.1.1
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn.so.8
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_adv_train.so.8
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_ops_train.so.8
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 43 bits physical, 48 bits virtual
CPU(s): 256
On-line CPU(s) list: 0-255
Thread(s) per core: 2
Core(s) per socket: 64
Socket(s): 2
NUMA node(s): 8
Vendor ID: AuthenticAMD
CPU family: 23
Model: 49
Model name: AMD EPYC 7742 64-Core Processor
Stepping: 0
Frequency boost: enabled
CPU MHz: 3346.395
CPU max MHz: 2250.0000
CPU min MHz: 1500.0000
BogoMIPS: 4491.12
Virtualization: AMD-V
L1d cache: 4 MiB
L1i cache: 4 MiB
L2 cache: 64 MiB
L3 cache: 512 MiB
NUMA node0 CPU(s): 0-15,128-143
NUMA node1 CPU(s): 16-31,144-159
NUMA node2 CPU(s): 32-47,160-175
NUMA node3 CPU(s): 48-63,176-191
NUMA node4 CPU(s): 64-79,192-207
NUMA node5 CPU(s): 80-95,208-223
NUMA node6 CPU(s): 96-111,224-239
NUMA node7 CPU(s): 112-127,240-255
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP conditional, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate sme ssbd mba sev ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif umip rdpid overflow_recov succor smca

Versions of relevant libraries:
[pip3] flake8==3.7.9
[pip3] numpy==1.19.2
[pip3] nvidia-dlprof-pytorch-nvtx==1.0.0
[pip3] pytorch-lightning==1.6.5
[pip3] pytorch-quantization==2.1.0
[pip3] pytorch-transformers==1.1.0
[pip3] pytorch-triton==2.1.0+7d1a95b046
[pip3] torch==2.1.0.dev20230522+cu117
[pip3] torchaudio==2.1.0.dev20230522+cu117
[pip3] torchdata==0.7.0.dev20230522
[pip3] torchmetrics==0.11.0
[pip3] torchtext==0.16.0.dev20230522+cpu
[pip3] torchvision==0.16.0.dev20230522+cu117
[conda] magma-cuda110 2.5.2 5 local
[conda] mkl 2019.4 243
[conda] mkl-include 2019.4 243
[conda] nomkl 3.0 0
[conda] numpy 1.19.2 py38h6163131_0
[conda] numpy-base 1.19.2 py38h75fe3a5_0
[conda] nvidia-dlprof-pytorch-nvtx 1.0.0 pypi_0 pypi
[conda] pytorch-lightning 1.6.5 pypi_0 pypi
[conda] pytorch-quantization 2.1.0 pypi_0 pypi
[conda] pytorch-transformers 1.1.0 pypi_0 pypi
[conda] pytorch-triton 2.1.0+7d1a95b046 pypi_0 pypi
[conda] torch 2.1.0.dev20230522+cu117 pypi_0 pypi
[conda] torchaudio 2.1.0.dev20230522+cu117 pypi_0 pypi
[conda] torchdata 0.7.0.dev20230522 pypi_0 pypi
[conda] torchmetrics 0.11.0 pypi_0 pypi
[conda] torchtext 0.16.0.dev20230522+cpu pypi_0 pypi
[conda] torchvision 0.16.0.dev20230522+cu117 pypi_0 pypi

cc @ezyang @gchanan @zou3519 @albanD @gqchen @pearu @nikitaved @soulitzer @lezcano @Varal7 @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu @msaroufim @wconstab @bdhirsh @anijain2305

Labels

actionable, high priority, module: autograd, triaged
