Conversation


@sajibreadd-croit sajibreadd-croit commented Dec 19, 2025

  1. Remote link damage identification with reverse parent scrubbing
    • Remote link identification becomes tricky if the inode is cached.
    • Try to open the link normally; if there is an issue while opening, mark
      it as damaged.
    • If it opened successfully, there can still be damage: the inode may be
      cached, which is why the open succeeded. In that case take the opened
      inode and scrub its ancestors recursively. If any ancestor is damaged,
      the remote link is marked as damaged.
    • While scrubbing, flags are maintained in the inode, e.g. whether the
      scrub is backward, forward, or both.
    • This backward scrubbing only works in a read-only scrub, i.e. without
      the repair flag and with the mds_scrub_hard_link ceph option turned on.
    • A new type of damage is introduced, with which multiple links pointing
      to the same inode can be identified, which was not possible previously.
  2. mds_damage_log_to_file and mds_damage_log_file are used to print damages
    to a file persistently, as it is not safe to keep them all in memory.
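The open-then-backward-scrub flow in item 1 can be sketched roughly as follows. This is an illustrative sketch only: the types and names here (`Inode`, `check_remote_link`, `scrub_ancestors`) are hypothetical stand-ins, not Ceph's actual MDS API.

```cpp
#include <cassert>

// Hypothetical minimal inode: a damage bit, the scrub-direction flags the
// PR description mentions, and a parent pointer for the backward walk.
struct Inode {
  bool damaged = false;
  bool forward_scrub = false;
  bool backward_scrub = false;
  Inode* parent = nullptr;
};

// Walk the ancestor chain; any damaged ancestor taints the remote link.
bool scrub_ancestors(Inode* in) {
  for (Inode* p = in->parent; p; p = p->parent) {
    p->backward_scrub = true;  // record the scrub direction on the inode
    if (p->damaged)
      return true;             // damage found somewhere up the tree
  }
  return false;
}

// Returns true when the remote link must be marked damaged.
bool check_remote_link(Inode* in, bool opened_ok) {
  if (!opened_ok)
    return true;               // open failed: damaged outright
  return scrub_ancestors(in);  // opened (possibly from cache): verify ancestry
}
```

The point of the second branch is exactly the caching caveat above: a successful open is not proof of health, so the ancestry is re-checked.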

Contribution Guidelines

  • To sign and title your commits, please refer to Submitting Patches to Ceph.

  • If you are submitting a fix for a stable branch (e.g. "quincy"), please refer to Submitting Patches to Ceph - Backports for the proper workflow.

  • When filling out the below checklist, you may click boxes directly in the GitHub web UI. When entering or editing the entire PR message in the GitHub web UI editor, you may also select a checklist item by adding an x between the brackets: [x]. Spaces and capitalization matter when checking off items this way.

Checklist

  • Tracker (select at least one)
    • References tracker ticket
    • Very recent bug; references commit where it was introduced
    • New feature (ticket optional)
    • Doc update (no ticket needed)
    • Code cleanup (no ticket needed)
  • Component impact
    • Affects Dashboard, opened tracker ticket
    • Affects Orchestrator, opened tracker ticket
    • No impact that needs to be tracked
  • Documentation (select at least one)
    • Updates relevant documentation
    • No doc update is appropriate
  • Tests (select at least one)

Comment on lines +244 to +257
} else if (in->scrub_info()->forward_scrub) {
  bool added_children = false;
  bool done = false; // it's done, so pop it off the stack
  scrub_dir_inode(in, &added_children, &done);
  if (done) {
    dout(20) << __func__ << " dir inode, done" << dendl;
    in->set_forward_scrub(false);
    dequeue(in);
  }
  if (added_children) {
    // dirfrags were queued at top of stack
    it = scrub_stack.begin();
  }
@sajibreadd-croit sajibreadd-croit Dec 19, 2025


@ifed01 regarding this comment #5 (comment)

Shouldn't we call scrub_dir_inode_final(in) here if (!remote_links.empty())?

If (!remote_links.empty()), then inside scrub_dir_inode it will call scrub_dir_inode_final(in) and handle the backward scrub there.


hmm... But calling scrub_dir_inode_final() would happen before forward_scrub is cleared, so it would be a no-op with regard to remote links, no?

Comment on lines 457 to 465
if (remote_inode) {
  remote_inode->make_path_string(head_path);
}
bool fatal = mdcache->mds->damage_table.notify_remote_link_damaged(ino, path,
                                                                   head_path);
if (fatal) {
  mdcache->mds->damaged();
  ceph_abort(); // unreachable, damaged() respawns us
}

@ifed01 regarding this comment #5 (comment)

@ifed01 regarding this comment #5 (comment)

I'm not sure we should abort at this point in the case of scrubbing. Maybe just log an error and proceed?

Damages are stored in memory, and the threshold mds_damage_table_max_entries is used to bound the size of the damage table in memory. It becomes fatal when the number of entries exceeds that threshold. Since we will print the damages to a file, fatal will always be false.
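A minimal sketch of the threshold behavior described above, under simplified assumptions: `DamageTable`, `notify`, `max_entries`, and `log_to_file` here are illustrative stand-ins for mds_damage_table_max_entries and mds_damage_log_to_file, not the actual Ceph DamageTable API.

```cpp
#include <cstddef>
#include <string>
#include <vector>

// Toy damage table: `fatal` is only possible for the in-memory path.
struct DamageTable {
  std::size_t max_entries;            // stands in for mds_damage_table_max_entries
  bool log_to_file;                   // stands in for mds_damage_log_to_file
  std::vector<std::string> entries;   // in-memory damage records

  // Returns true ("fatal") only when the in-memory table would overflow.
  bool notify(const std::string& desc) {
    if (log_to_file) {
      // Persisted to a file elsewhere; memory never fills, so never fatal.
      return false;
    }
    if (entries.size() >= max_entries)
      return true;                    // table full: caller would respawn the MDS
    entries.push_back(desc);
    return false;
  }
};
```

This mirrors the argument in the comment: with file logging enabled, the overflow branch is unreachable, so `fatal` stays false.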


I understand that, but damage_table has protection against oversized memory usage. So IIUC it just omits new entries once the threshold has been reached. So why abort an MDS that's just scrubbing? I understand the original behavior when this happens after an attempt to open a remote link, but it looks redundant for scrubbing.


It's fixed now, thanks.

For scrubbing a dirfrag we push its children onto the back of the scrub stack. Instead, we can
follow the same strategy for scrubbing a directory and push the children onto the front of the
scrub stack, and in kick_off_scrubs always start scrubbing from the front of the stack. This
prevents the ScrubStack from pinning a whole level of the file-system tree.

Fixes: https://tracker.ceph.com/issues/71167
Signed-off-by: Md Mahamudur Rahaman Sajib <mahamudur.sajib@croit.io>
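The front-of-stack scheduling described in the commit message above can be sketched like this. It is a hedged illustration only: `ScrubStack` as a `std::list<std::string>` and the function names (`queue_children_front`, `kick_off_scrub`) are hypothetical, not the real MDS types.

```cpp
#include <list>
#include <string>

// Toy scrub stack: entries are just names instead of inodes/dirfrags.
using ScrubStack = std::list<std::string>;

// Children go to the FRONT, so only one branch of the tree is pinned at a
// time instead of a whole level's worth of back-queued siblings.
void queue_children_front(ScrubStack& stack,
                          const std::list<std::string>& children) {
  stack.insert(stack.begin(), children.begin(), children.end());
}

// kick_off_scrubs always takes the next work item from the front.
std::string kick_off_scrub(ScrubStack& stack) {
  std::string next = stack.front();
  stack.pop_front();
  return next;
}
```

With this ordering, scrubbing a directory immediately descends into its first child before touching siblings, which is the depth-first behavior the commit is after.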
Fixes: https://tracker.ceph.com/issues/68611
Signed-off-by: Md Mahamudur Rahaman Sajib <mahamudur.sajib@croit.io>
1. Remote link damage identification with reverse parent scrubbing
   - Remote link identification becomes tricky if the inode is cached.
   - Try to open the link normally; if there is an issue while opening, mark
     it as damaged.
   - If it opened successfully, there can still be damage: the inode may be
     cached, which is why the open succeeded. In that case take the opened
     inode and scrub its ancestors recursively. If any ancestor is damaged,
     the remote link is marked as damaged.
   - While scrubbing, flags are maintained in the inode, e.g. whether the
     scrub is backward, forward, or both.
   - This backward scrubbing only works in a read-only scrub, i.e. without
     the repair flag and with the mds_scrub_hard_link ceph option turned on.
   - A new type of damage is introduced, with which multiple links pointing
     to the same inode can be identified, which was not possible previously.
2. mds_damage_log_to_file and mds_damage_log_file are used to print damages
   to a file persistently, as it is not safe to keep them all in memory.
Signed-off-by: Md Mahamudur Rahaman Sajib <mahamudur.sajib@croit.io>
@sajibreadd-croit force-pushed the wip-washu-scrub-fix-v18.2.4 branch from 9e16a8e to 14e62d3 on January 7, 2026 08:23