Reduce warnings verbosity in tests #7322
Conversation
Auto-sync is disabled for draft pull requests in this repository. Workflows must be run manually.
/ok to test 430dd00
/ok to test 9eef068
/ok to test caeaf09
/ok to test b448d47
/ok to test 0412c92
/ok to test 99abfdd
jcrist
left a comment
Not blocking this, just commenting.
Some of these warnings seem like they'd be better squashed by changing our tests to not trigger them.
I also worry about cases where we're filtering warnings based on strings (which can change, especially in upstream code), or where warning filters aren't removed once we stop triggering the warning itself.
Do you have suggestions on how to make the filtering here maintainable?
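To illustrate the trade-off, here's a minimal sketch; the `fit_model` helper and the exact warning message are made up for the example, not code from this PR:

```python
import warnings

import pytest


def fit_model(solver):
    # Stand-in (hypothetical) for a library call that may emit a UserWarning.
    if solver == "qn":
        warnings.warn("Changing solver from 'qn' to 'cd'", UserWarning)
    return solver


# Option 1: silence by message pattern. The filter goes stale silently if the
# upstream message changes or the warning stops being raised at all.
@pytest.mark.filterwarnings("ignore:Changing solver.*:UserWarning")
def test_with_filter():
    assert fit_model("qn") == "qn"


# Option 2: assert the warning explicitly. The test fails as soon as the
# warning disappears or its message changes, so the expectation can't rot.
def test_with_explicit_warning():
    with pytest.warns(UserWarning, match="Changing solver"):
        fit_model("qn")
```

The second form doubles as a regression test for the warning itself, at the cost of being more verbose when the same warning shows up in many tests.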
Agreed. I think we can immediately continue with some follow-up work (#7361), but I don't want to delay merging this any further, considering that it should already improve the current situation.
This PR ignores the vast majority of warnings, but not all of them, and thus represents a best-effort attempt at reducing warning verbosity as outlined in #6366. We should follow up by addressing a large number of the currently filtered warnings (#7361) and by failing on all warnings by default (#7360), so that we are immediately alerted to changes in warning strings or to new warnings.
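As a rough sketch of what the #7360 setup could look like, assuming a standard pytest configuration (the ignore entry below is only a placeholder, not the actual list from this PR):

```toml
# pyproject.toml -- hypothetical sketch, not the actual configuration in this repo
[tool.pytest.ini_options]
filterwarnings = [
    # Escalate any warning not matched by a later entry into a test failure,
    # so new warnings and stale filters surface immediately.
    "error",
    # Explicit, documented exceptions (placeholder entry).
    "ignore:Changing solver.*:UserWarning",
]
```

With `error` first, any warning not matched by a later `ignore` entry fails the test run, which is what makes stale filters and brand-new warnings visible right away.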
jcrist
left a comment
LGTM (acknowledging there's follow-up work here). Just one note.
assert np.corrcoef(cd.coef_, qn.coef_)[0, 1] > 0.98

@pytest.mark.filterwarnings("ignore:Changing solver.*:UserWarning")
I renamed and rewrote this test in #7355, and am a bit surprised not to see a merge conflict here. 🤷 Not sure what the merged version will look like, or whether this decorator will still be needed here.
I checked locally and it looks like it's applying it correctly:
diff --git a/python/cuml/tests/test_linear_model.py b/python/cuml/tests/test_linear_model.py
index 6eb48d45a..b7d5e76ee 100644
--- a/python/cuml/tests/test_linear_model.py
+++ b/python/cuml/tests/test_linear_model.py
@@ -1115,30 +1115,31 @@ def test_elasticnet_solvers_eq(datatype, alpha, l1_ratio, nrows, column_info):
)
kwargs = {"alpha": alpha, "l1_ratio": l1_ratio}
cd = cuElasticNet(solver="cd", **kwargs)
cd.fit(X_train, y_train)
cd_res = cd.predict(X_test)
qn = cuElasticNet(solver="qn", **kwargs)
qn.fit(X_train, y_train)
# the results of the two models should be close (even if both are bad)
assert qn.score(X_test, cd_res) > 0.90
# coefficients of the two models should be close
assert np.corrcoef(cd.coef_, qn.coef_)[0, 1] > 0.98
+@pytest.mark.filterwarnings("ignore:Changing solver.*:UserWarning")
@given(
algo=st.sampled_from(["eig", "qr", "svd", "svd-qr"]),
n_targets=st.integers(min_value=1, max_value=2),
fit_intercept=st.booleans(),
weighted=st.booleans(),
)
@example(algo="eig", n_targets=1, fit_intercept=True, weighted=False)
@example(algo="qr", n_targets=1, fit_intercept=True, weighted=False)
@example(algo="svd-qr", n_targets=1, fit_intercept=True, weighted=False)
@example(algo="svd", n_targets=1, fit_intercept=True, weighted=False)
@example(algo="svd", n_targets=2, fit_intercept=False, weighted=True)
def test_linear_regression_input_mutation(
algo, n_targets, fit_intercept, weighted
):
"""Check that `LinearRegression.fit`:|
/merge
Closes #6366.