Conversation

hanzlfs
Contributor

@hanzlfs hanzlfs commented Apr 29, 2025

  1. Change the hard-coded float16 type in attention_mgpu.py to a configurable dtype.
  2. Add a dtype parameter for q, k, and v in mgpu_attention_test.py (see the sketch after this list).
  3. Raise atol/rtol to accommodate the lower-precision dtypes.
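
The diff itself is not reproduced here; as a rough illustration of items 1 and 2, this is a minimal sketch of threading a dtype through the test inputs instead of hard-coding float16. The helper name make_qkv and its signature are hypothetical, not the actual mgpu_attention_test.py API.

import jax
import jax.numpy as jnp

def make_qkv(shape, dtype=jnp.float16, seed=0):
    # Hypothetical helper: build q, k, v with a configurable dtype
    # rather than a hard-coded jnp.float16.
    kq, kk, kv = jax.random.split(jax.random.key(seed), 3)
    q = jax.random.normal(kq, shape, dtype=dtype)
    k = jax.random.normal(kk, shape, dtype=dtype)
    v = jax.random.normal(kv, shape, dtype=dtype)
    return q, k, v

# The kernel side would similarly derive its element type from the
# inputs (e.g. q.dtype) instead of assuming float16.

The change was verified with the following bazel invocation: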
bazel test --override_repository=xla=/data/zhonglin/oss/xla \
    --config=ci_linux_x86_64_cuda \
    --test_env=JAX_NUM_GENERATED_CASES=1 \
    --//jax:build_jaxlib=true --jobs=8 \
    --test_env=XLA_PYTHON_CLIENT_ALLOCATOR=platform \
    --test_env=JAX_ENABLE_X64=true \
    --test_env=JAX_SKIP_SLOW_TESTS=true \
    --test_env=PYTHON_GIL=0 \
    --test_env=JAX_TEST_NUM_THREADS=8 \
    --local_test_jobs=8 \
    --test_timeout=600 \
    --define xnn_enable_avxvnniint8=false \
    --define xnn_enable_avx512fp16=false \
    --test_tag_filters=-multiaccelerator \
    //tests/pallas:mgpu_attention_test_gpu --test_output=all
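
Item 3 (raising atol/rtol) would look something like the following in the test; the concrete tolerance values below are illustrative assumptions, not the ones in the PR.

import numpy as np
import jax.numpy as jnp

# Illustrative, dtype-dependent tolerances (assumed values, not the PR's).
TOLERANCES = {
    jnp.float16: dict(atol=2e-3, rtol=2e-3),
    jnp.bfloat16: dict(atol=2e-2, rtol=2e-2),
}

def assert_attention_close(actual, expected, dtype):
    # Compare in float32 so the tolerance check itself is not quantized.
    np.testing.assert_allclose(
        np.asarray(actual, dtype=np.float32),
        np.asarray(expected, dtype=np.float32),
        **TOLERANCES[dtype],
    )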

google-cla bot commented Apr 29, 2025

Thanks for your pull request! It looks like this may be your first contribution to a Google open source project. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA).

View this failed invocation of the CLA check for more information.

For the most up-to-date status, view the checks section at the bottom of the pull request.
