Allow disabling bias for LayerNorm
#101683
Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/101683
Note: Links to docs will display an error until the docs builds have been completed.
✅ No Failures as of commit 9ff21d5.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
Hey @janEbert, thanks for the PR - is there an associated issue for this?
For my understanding, is this change related to the porting of RMSNorm from this comment?
Hey @jbschlosser, I tried searching issues and PRs matching "LayerNorm without bias" and similar, but didn't find anything. I haven't opened an issue for this, but I can do so if it makes administration easier. The motivation for this PR is T5-style models, as discussed in @mikaylagawarecki's linked issue (which I didn't find in my search). The PaLM paper also reported increased training stability for large models when the LayerNorm bias is disabled.
Should this be discussed? I personally think …
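As an aside for readers, a minimal sketch of the API under discussion, assuming the `bias` keyword this PR adds to `nn.LayerNorm` (check the diff for the exact signature):

```python
import torch
import torch.nn as nn

# LayerNorm with the learnable additive bias (beta) disabled, as this PR
# enables. elementwise_affine=True keeps the learnable scale gamma.
ln = nn.LayerNorm(512, elementwise_affine=True, bias=False)

x = torch.randn(8, 128, 512)
y = ln(x)  # input is still centered and rescaled, but no beta is added

assert ln.weight is not None  # gamma is still learned
assert ln.bias is None        # beta is gone
```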
Hey, any new opinions on this? I'd be really happy to see this merged so that the PyTorch Transformers API becomes more flexible for scaling up. :)
@janEbert Apologies for the delay. So my understanding (which perhaps you were getting at) is that:

This PR is doing

$$y = \frac{x - \mathrm{E}[x]}{\sqrt{\mathrm{Var}[x] + \epsilon}} \odot \gamma$$

and RMSNorm is

$$y = \frac{x}{\mathrm{RMS}[x]} \odot \gamma$$

Where

$$\mathrm{RMS}[x] = \sqrt{\epsilon + \frac{1}{n} \sum_{i=1}^{n} x_i^2}$$

and so LayerNorm with bias=False still centers the input by subtracting its mean, while RMSNorm skips centering entirely.

And this change is completely separate from RMSNorm.
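To make the distinction concrete, here is a small hand-rolled sketch (my illustration, not code from the PR) of the two formulas above:

```python
import torch

def layernorm_no_bias(x, gamma, eps=1e-5):
    # Standard LayerNorm minus the beta term: center by the mean,
    # normalize by the (biased) variance, scale by gamma.
    mean = x.mean(dim=-1, keepdim=True)
    var = x.var(dim=-1, unbiased=False, keepdim=True)
    return (x - mean) / torch.sqrt(var + eps) * gamma

def rms_norm(x, gamma, eps=1e-5):
    # RMSNorm: no centering at all, normalize by the root mean square.
    rms = torch.sqrt(x.pow(2).mean(dim=-1, keepdim=True) + eps)
    return x / rms * gamma

x = torch.randn(4, 16)
gamma = torch.ones(16)
# The two differ in general, since LayerNorm still subtracts E[x]:
print(torch.allclose(layernorm_no_bias(x, gamma), rms_norm(x, gamma)))  # False
```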
@pytorchbot rebase
@pytorchbot started a rebase job onto refs/remotes/origin/viable/strict. Check the current status here.
Successfully rebased; force-pushed from 729d8c6 to 9ff21d5.
Thank you so much @mikaylagawarecki, that's an amazing summary that clears up any misunderstandings!
@pytorchbot merge
Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
As used by T5 and PaLM, citing "increased training stability for large models" (https://arxiv.org/abs/2204.02311). Depends on #101683, which allows disabling bias for `LayerNorm`s; marked as draft due to this.
Pull Request resolved: #101687
Approved by: https://github.com/mikaylagawarecki
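For context, a hypothetical usage sketch of what that follow-up enables, assuming it threads a `bias` flag through `nn.TransformerEncoderLayer` (the flag name follows the LayerNorm change; see the PR for the exact API):

```python
import torch
import torch.nn as nn

# T5/PaLM-style configuration: no additive biases in the attention,
# feed-forward, or LayerNorm submodules of a Transformer layer.
layer = nn.TransformerEncoderLayer(
    d_model=512,
    nhead=8,
    dim_feedforward=2048,
    bias=False,       # assumed flag added by the follow-up PR
    batch_first=True,
)

x = torch.randn(2, 64, 512)
out = layer(x)
print(out.shape)  # torch.Size([2, 64, 512])
```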
Only relevant if `elementwise_affine=True`.
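A quick sketch of that interaction (my illustration; attribute names per `nn.LayerNorm`):

```python
import torch.nn as nn

# bias=False only has an effect when affine parameters exist at all.
ln = nn.LayerNorm(64, elementwise_affine=True, bias=False)  # gamma only
ln_plain = nn.LayerNorm(64, elementwise_affine=False)       # neither gamma nor beta

print(ln.weight is not None, ln.bias is None)          # True True
print(ln_plain.weight is None, ln_plain.bias is None)  # True True
```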