Create remember_me column for user sessions #44160
Conversation
@ahus1, I'm opening this draft PR here as I need your expertise. I'm using pgAdmin to test the indexes, but PostgreSQL seems to refuse to use them. In the queries below, the table has 7000 rows, and the queries match all rows.

Query 1
Query 2
+1 for not migrating the sessions during the Keycloak upgrade. Still, I don't think a premature removal is a valid approach. Instead we would need to lazily update the values as part of the expiry job. I didn't look into the code yet; I might have a more specific suggestion later.
Did you run ANALYZE for the table first?
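For context, planner statistics matter here. The lines below are only a sketch of how this could be checked in PostgreSQL: the table and column names are assumed from this PR, and the predicate values are placeholders. Note that a query matching (almost) all 7000 rows will usually get a sequential scan even with fresh statistics, because reading the whole table is cheaper than going through an index.

-- Sketch only: table/column names assumed from this PR, predicate values are placeholders.
-- Refresh planner statistics for the table.
ANALYZE OFFLINE_USER_SESSION;

-- EXPLAIN ANALYZE executes the query and reports the plan that was actually chosen.
EXPLAIN (ANALYZE, BUFFERS)
SELECT USER_SESSION_ID
FROM OFFLINE_USER_SESSION
WHERE REMEMBER_ME = false
  AND LAST_SESSION_REFRESH < 1700000000;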
<changeSet id="26.5.0-add-remember-me" author="keycloak">
    <addColumn tableName="OFFLINE_USER_SESSION">
        <column name="REMEMBER_ME" type="BOOLEAN" defaultValue="false">
            <constraints nullable="false"/>
To allow for a graceful migration of the user sessions that are remember-me vs. non-remember-me, I suggest allowing null values in the REMEMBER_ME column. When cleaning up the rows that have expired, one would select remember-me, non-remember-me, and null values. For the null values, this would trigger a conversion, which could either write the updated remember-me value to the database (when "true") or delete the entry.
Please let me know your thoughts.
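As an illustration of that suggestion, the expiry job's selection might look roughly like the sketch below. The column names come from the changeset above; the cutoff values are placeholders, and the actual Keycloak queries will differ.

-- Illustrative sketch only; the cutoff timestamps are placeholder values.
-- Rows with REMEMBER_ME still NULL have not been converted yet, so the job can decide per row
-- whether to write the migrated remember-me value or delete the expired session.
SELECT USER_SESSION_ID, DATA, REMEMBER_ME
FROM OFFLINE_USER_SESSION
WHERE REMEMBER_ME IS NULL
   OR (REMEMBER_ME = false AND LAST_SESSION_REFRESH < 1700000000)  -- regular idle-timeout cutoff (placeholder)
   OR (REMEMBER_ME = true  AND LAST_SESSION_REFRESH < 1690000000); -- remember-me cutoff (placeholder)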
I don't like adding extra complexity or cost to the expiration logic, as it will be executed frequently on all nodes.
I did not; I only clicked the "explain query" button 🤣
I hear you. I'd say a Java-based migration will be too slow when there are millions of entries. Can you think of a plain SQL UPDATE statement that can be part of the Liquibase changeset? It would do a lot of database I/O (full table scan, updating some rows), but it would not need any batch handling in Java. WDYT? Migrating the sessions table is then on the critical path for a Keycloak upgrade, and the table will be locked for changes while doing that. Alternative approaches:
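A rough sketch of such an UPDATE, for PostgreSQL only and assuming the DATA column stores the session entity as JSON text with a rememberMe attribute (the key name and column type are not verified here), might be:

-- PostgreSQL-only sketch; the JSON key name "rememberMe" is an assumption, not verified against the schema.
-- A single full-table-scan statement, which is exactly the upgrade-time cost discussed above.
UPDATE OFFLINE_USER_SESSION
SET REMEMBER_ME = COALESCE((DATA::jsonb ->> 'rememberMe')::boolean, false)
WHERE DATA IS NOT NULL;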
force-pushed from 33b5aff to ae601bf
force-pushed from ae601bf to 43f1e02
Closes keycloak#44112 Signed-off-by: Pedro Ruivo <[email protected]>
force-pushed from 43f1e02 to 4940706
Rebased and applied Spotless rules.
ahus1 left a comment
Thank you for the PR. Looks good to me; still, one item caught my eye. Can you please help me understand it?
Closes #44112
Migrate remember_me from the data column to the new column

I didn't want to use the UserSessionProvider#migrate(String) method, because it may slow down the startup process for a large dataset. Instead, I went with a lazy approach; when a session is accessed, it updates the remember_me column. It should not have any impact on performance because the last_session_refresh column is updated on every request (no extra database calls). The disadvantage is the premature removal of valid sessions, as the UserSessionPersisterProvider#removeExpired(Realm) method will use the "non remember me" settings.

TODO
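For illustration only, a sketch of the effective statement behind the lazy approach described above: the remember_me value is written together with the regular last_session_refresh update, so no extra round trip is needed. The real implementation presumably goes through JPA; all values below are placeholders.

-- Sketch of the effective SQL behind the lazy migration; not the PR's actual code.
UPDATE OFFLINE_USER_SESSION
SET LAST_SESSION_REFRESH = 1700000123,  -- current timestamp (placeholder value)
    REMEMBER_ME = true                  -- value already known from the session loaded in memory
WHERE USER_SESSION_ID = 'abc-123';      -- placeholder session id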