Hjiang/handle mulitple ongoing streaming flush #71
base: main
Conversation
Summary of Changes
Hello @dentiny, I'm Gemini Code Assist[^1]! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request enhances the `MooncakeTable`'s ability to manage concurrent data flushes, particularly for streaming transactions. It refactors the internal tracking of ongoing flushes from a simple set to a map that maintains a reference count for each Log Sequence Number (LSN). This change ensures that an LSN is only marked as fully flushed when all associated operations, especially multiple parts of a streaming transaction, have completed, preventing premature state transitions and improving the robustness of the flushing mechanism.
Highlights
- Enhanced Flush Tracking: The `ongoing_flush_lsns` data structure in `MooncakeTable` has been upgraded from a `BTreeSet` to a `BTreeMap<u64, u32>`. This allows the system to track not just the presence of an LSN for an ongoing flush, but also the number of operations associated with that LSN (a minimal sketch of this scheme follows the list).
- Support for Multiple Concurrent Flushes: The new reference counting mechanism correctly handles scenarios where multiple streaming flushes might share the same LSN, ensuring that the LSN is only removed from the tracking when all its associated flush operations are complete.
- Refined Flush Management Logic: The `insert_ongoing_flush_lsn` and `remove_ongoing_flush_lsn` methods now correctly increment and decrement the LSN counts, respectively, with `remove_ongoing_flush_lsn` only removing an LSN when its count drops to zero.
- Improved Snapshot Consistency: By accurately tracking all ongoing flushes, the system can maintain better consistency for Iceberg snapshots, as the `get_min_ongoing_flush_lsn` method will always reflect the true minimum LSN that is still pending.
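
To make the reference-counting idea concrete, here is a minimal standalone sketch of the scheme described above, assuming the counting semantics from the highlights. The struct and method names mirror the identifiers mentioned in this summary, but this is an illustration, not the PR's actual `MooncakeTable` implementation.

```rust
use std::collections::BTreeMap;

/// Minimal illustrative model of the LSN tracking described above.
struct FlushTracker {
    /// LSN -> number of ongoing flush operations that reference it.
    ongoing_flush_lsns: BTreeMap<u64, u32>,
}

impl FlushTracker {
    /// Register one more ongoing flush for `lsn`.
    fn insert_ongoing_flush_lsn(&mut self, lsn: u64) {
        *self.ongoing_flush_lsns.entry(lsn).or_insert(0) += 1;
    }

    /// Mark one flush for `lsn` as complete; the LSN is dropped only
    /// once every flush that referenced it has finished.
    fn remove_ongoing_flush_lsn(&mut self, lsn: u64) {
        if let Some(count) = self.ongoing_flush_lsns.get_mut(&lsn) {
            *count -= 1;
            if *count == 0 {
                self.ongoing_flush_lsns.remove(&lsn);
            }
        }
    }

    /// Smallest LSN that still has a pending flush, or `u64::MAX` if none.
    fn get_min_ongoing_flush_lsn(&self) -> u64 {
        self.ongoing_flush_lsns
            .first_key_value()
            .map(|(lsn, _)| *lsn)
            .unwrap_or(u64::MAX)
    }
}

fn main() {
    let mut tracker = FlushTracker { ongoing_flush_lsns: BTreeMap::new() };
    tracker.insert_ongoing_flush_lsn(100); // first part of a streaming flush
    tracker.insert_ongoing_flush_lsn(100); // second part, same LSN
    tracker.remove_ongoing_flush_lsn(100);
    assert_eq!(tracker.get_min_ongoing_flush_lsn(), 100); // still pending
    tracker.remove_ongoing_flush_lsn(100);
    assert_eq!(tracker.get_min_ongoing_flush_lsn(), u64::MAX); // all parts done
}
```

With this model, two streaming flushes at LSN 100 keep the reported minimum pinned at 100 until both complete, which is the behavior the summary describes.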
Using Gemini Code Assist
The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.
Invoking Gemini
You can request assistance from Gemini at any point in your pull request by creating an issue comment (i.e., a comment on the pull request page) using either `/gemini <command>` or `@gemini-code-assist <command>`. Below is a summary of the supported commands.
| Feature | Command | Description |
| --- | --- | --- |
| Code Review | `/gemini review` | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | `/gemini summary` | Provides a summary of the current pull request in its current state. |
| Comment | `@gemini-code-assist` | Responds in comments when explicitly tagged, both in issue comments and review comments. |
| Help | `/gemini help` | Displays a list of available commands. |
Customization
To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a `.gemini/` folder in the base of the repository. Detailed instructions can be found here.
Limitations & Feedback
Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.
Footnotes
[^1]: Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double-check it and use code with caution.
Code Review
This pull request refactors the tracking of ongoing flushes to support multiple flushes for the same LSN, which is a crucial enhancement for handling streaming transactions. The change from `BTreeSet` to `BTreeMap` to count flushes per LSN is well-executed across the codebase. However, I've identified a critical bug in how the flush count is incremented for streaming transactions, which could lead to a resource leak. I've also included a suggestion to improve code style for better readability and maintainability. Overall, this is a solid improvement with one critical fix needed.
self.flush_disk_slice(
    &mut disk_slice,
    table_notify_tx,
    Some(xact_id),
    ongoing_flush_count,
    event_id,
);
The `ongoing_flush_count` is being passed to `flush_disk_slice`, which causes an issue with tracking ongoing flushes. The `insert_ongoing_flush_lsn` function adds this count to the total for a given LSN, but `remove_ongoing_flush_lsn` only decrements the count by 1. This will lead to a resource leak, as the flush count for an LSN will never reach zero if a streaming transaction is flushed more than once. Each flush operation should increment the global ongoing flush count by 1.

With this change, the `ongoing_flush_count` variable at line 540 becomes unused and can be removed.
Suggested change:

  self.flush_disk_slice(
      &mut disk_slice,
      table_notify_tx,
      Some(xact_id),
-     ongoing_flush_count,
+     1,
      event_id,
  );
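
To see why this matters, here is a tiny hypothetical model of the leak, assuming (as the comment above describes) that registration adds the full count while each completion removes only one reference. The variable names and values are illustrative and not taken from the actual code path.

```rust
use std::collections::BTreeMap;

fn main() {
    let mut ongoing_flush_lsns: BTreeMap<u64, u32> = BTreeMap::new();
    let lsn = 42u64;
    let ongoing_flush_count = 3u32; // e.g. three buffered parts of one streaming xact

    // Registration adds the whole count for the LSN at once...
    *ongoing_flush_lsns.entry(lsn).or_insert(0) += ongoing_flush_count;

    // ...but the completion path for this flush only decrements by one.
    if let Some(count) = ongoing_flush_lsns.get_mut(&lsn) {
        *count -= 1;
        if *count == 0 {
            ongoing_flush_lsns.remove(&lsn);
        }
    }

    // The count never reaches zero, so the LSN is never cleaned up and the
    // minimum ongoing flush LSN stays pinned at 42 indefinitely.
    assert_eq!(ongoing_flush_lsns.get(&lsn), Some(&2));
}
```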
  pub fn get_min_ongoing_flush_lsn(&self) -> u64 {
-     self.ongoing_flush_lsns
-         .iter()
-         .next()
-         .copied()
-         .unwrap_or(u64::MAX)
+     if let Some((lsn, _)) = self.ongoing_flush_lsns.first_key_value() {
+         return *lsn;
+     }
+     u64::MAX
  }
This function can be simplified to be more idiomatic by using `map` and `unwrap_or` on the `Option` returned by `first_key_value`.
Suggested change:

  pub fn get_min_ongoing_flush_lsn(&self) -> u64 {
-     if let Some((lsn, _)) = self.ongoing_flush_lsns.first_key_value() {
-         return *lsn;
-     }
-     u64::MAX
+     self.ongoing_flush_lsns
+         .first_key_value()
+         .map(|(lsn, _)| *lsn)
+         .unwrap_or(u64::MAX)
  }
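
As a quick standalone illustration of the suggested pattern (not the PR's code), `BTreeMap::first_key_value` returns the entry with the smallest key, so the getter reduces to a single `map`/`unwrap_or` chain. The free function below stands in for the method on `MooncakeTable`.

```rust
use std::collections::BTreeMap;

// Hypothetical free-function equivalent of the suggested method body.
fn min_ongoing_flush_lsn(ongoing: &BTreeMap<u64, u32>) -> u64 {
    ongoing
        .first_key_value()
        .map(|(lsn, _)| *lsn)
        .unwrap_or(u64::MAX)
}

fn main() {
    let mut ongoing = BTreeMap::new();
    assert_eq!(min_ongoing_flush_lsn(&ongoing), u64::MAX); // nothing pending

    ongoing.insert(7, 1);
    ongoing.insert(3, 2);
    assert_eq!(min_ongoing_flush_lsn(&ongoing), 3); // smallest pending LSN
}
```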
Summary
Briefly explain what this PR does.
Related Issues
Closes # or links to related issues.
Changes
Checklist