Conversation

@csadorf (Contributor) commented Oct 8, 2025

Closes #6366.

@copy-pr-bot bot commented Oct 8, 2025

Auto-sync is disabled for draft pull requests in this repository. Workflows must be run manually.


@github-actions github-actions bot added the Cython / Python Cython or Python issue label Oct 8, 2025
@csadorf csadorf added improvement Improvement / enhancement to an existing function non-breaking Non-breaking change labels Oct 15, 2025
@csadorf (Contributor, Author) commented Oct 15, 2025

/ok to test 430dd00

@csadorf (Contributor, Author) commented Oct 16, 2025

/ok to test 9eef068

@csadorf (Contributor, Author) commented Oct 16, 2025

/ok to test caeaf09

@csadorf (Contributor, Author) commented Oct 16, 2025

/ok to test b448d47

@csadorf (Contributor, Author) commented Oct 16, 2025

/ok to test 0412c92

@csadorf (Contributor, Author) commented Oct 17, 2025

/ok to test 99abfdd

@csadorf csadorf marked this pull request as ready for review October 17, 2025 15:56
@csadorf csadorf requested review from a team as code owners October 17, 2025 15:56
@jcrist (Member) left a comment


Not blocking this, just commenting.

Some of these warnings seem like they'd be better squashed by changing our tests to not trigger them.

I also worry about cases where we're filtering warnings based on strings (which can change, especially in upstream code), or the warning filters not being removed when we stop triggering the warning itself.

Do you have suggestions on how to make the filtering here maintainable?

@csadorf (Contributor, Author) commented Oct 17, 2025

> Some of these warnings seem like they'd be better squashed by changing our tests to not trigger them.

Agreed. I think we can follow up immediately with some of that work (#7361), but I don't want to delay merging this any further, since it should already improve the current situation.

> I also worry about cases where we're filtering warnings based on strings (which can change, especially in upstream code), or the warning filters not being removed when we stop triggering the warning itself.
>
> Do you have suggestions on how to make the filtering here maintainable?

This PR ignores the vast majority of warnings, but not all, and thus represents a best-effort attempt at reducing warning verbosity as outlined in #6366. We should follow up by addressing a large number of the currently filtered warnings (#7361) and by failing on all warnings by default (#7360), so that we are immediately alerted to changes in warning strings or to new warnings.
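To illustrate why error-by-default plus targeted ignores keeps string-based filters maintainable, here is a small sketch using only the standard `warnings` module (not code from this repository): with everything escalated to an error by default, an "ignore" entry that stops matching (for example because upstream reworded the message) immediately turns into a failure instead of a silently unfiltered warning. The `"Changing solver"` pattern mirrors the marker used in this PR; `check_message()` is a hypothetical helper.

```python
import warnings


def check_message(message):
    """Return "ignored" if the warning is filtered, "raised" otherwise."""
    with warnings.catch_warnings():
        warnings.simplefilter("error")  # fail on all warnings by default
        # Targeted exemption, analogous to the pytest filterwarnings marker:
        warnings.filterwarnings(
            "ignore", message="Changing solver.*", category=UserWarning
        )
        try:
            warnings.warn(message, UserWarning)
        except UserWarning:
            return "raised"
        return "ignored"


print(check_message("Changing solver to qn."))  # still matches -> ignored
print(check_message("Solver changed to qn."))   # drifted message -> raised
```

The key point is that a stale ignore pattern fails loudly under the error default, rather than quietly letting the renamed warning back into the test output.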

@jcrist (Member) left a comment


LGTM (acknowledging there's followup work here). Just one note.

assert np.corrcoef(cd.coef_, qn.coef_)[0, 1] > 0.98


@pytest.mark.filterwarnings("ignore:Changing solver.*:UserWarning")
@jcrist (Member) commented on this line:

I renamed and rewrote this test in #7355, and am a bit surprised not to see a merge conflict here. 🤷 I'm not sure what the merged version will look like, or whether this decorator will still be needed.

@csadorf (Contributor, Author) replied:

I checked locally and it looks like it's applying it correctly:

diff --git a/python/cuml/tests/test_linear_model.py b/python/cuml/tests/test_linear_model.py
index 6eb48d45a..b7d5e76ee 100644
--- a/python/cuml/tests/test_linear_model.py
+++ b/python/cuml/tests/test_linear_model.py
@@ -1115,30 +1115,31 @@ def test_elasticnet_solvers_eq(datatype, alpha, l1_ratio, nrows, column_info):
     )
 
     kwargs = {"alpha": alpha, "l1_ratio": l1_ratio}
     cd = cuElasticNet(solver="cd", **kwargs)
     cd.fit(X_train, y_train)
     cd_res = cd.predict(X_test)
 
     qn = cuElasticNet(solver="qn", **kwargs)
     qn.fit(X_train, y_train)
     # the results of the two models should be close (even if both are bad)
     assert qn.score(X_test, cd_res) > 0.90
     # coefficients of the two models should be close
     assert np.corrcoef(cd.coef_, qn.coef_)[0, 1] > 0.98
 
 
+@pytest.mark.filterwarnings("ignore:Changing solver.*:UserWarning")
 @given(
     algo=st.sampled_from(["eig", "qr", "svd", "svd-qr"]),
     n_targets=st.integers(min_value=1, max_value=2),
     fit_intercept=st.booleans(),
     weighted=st.booleans(),
 )
 @example(algo="eig", n_targets=1, fit_intercept=True, weighted=False)
 @example(algo="qr", n_targets=1, fit_intercept=True, weighted=False)
 @example(algo="svd-qr", n_targets=1, fit_intercept=True, weighted=False)
 @example(algo="svd", n_targets=1, fit_intercept=True, weighted=False)
 @example(algo="svd", n_targets=2, fit_intercept=False, weighted=True)
 def test_linear_regression_input_mutation(
     algo, n_targets, fit_intercept, weighted
 ):
     """Check that `LinearRegression.fit`:

@csadorf (Contributor, Author) commented Oct 17, 2025

/merge

@rapids-bot rapids-bot bot merged commit 8882188 into rapidsai:branch-25.12 Oct 17, 2025
101 checks passed
@csadorf csadorf deleted the issue/6366 branch October 17, 2025 19:49


Development

Successfully merging this pull request may close: Cleanup warnings in test suite (#6366)

3 participants