
HDDS-15147. Improve Recon Datanode Insight export: filename, size unit #10177

Open
navinko wants to merge 6 commits into apache:master from navinko:HDDS-15147

Conversation

@navinko
Contributor

@navinko navinko commented May 2, 2026

What changes were proposed in this pull request?

  • Updated server-side Datanode insight export filename generation: the clusterId is appended with a timestamp.
    Format used: Datanode_Insights_ClusterId_timestamp
    Example: Datanode_Insights_CID-ec3d0c23-7863-4f94-9e48-78134b891a62_20260502_1115.csv
  • Header unit kept as bytes
  • Fixed PMD failures
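The filename scheme above can be sketched as follows. This is a minimal illustration, not the PR's actual code: the method name `buildExportFileName` is hypothetical, while the `yyyyMMdd_HHmm` pattern and the `UnknownCluster` fallback come from the snippets quoted in the review below.

```java
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;

public class ExportFileName {
  // Kept as a private static final field, as suggested in the review.
  private static final DateTimeFormatter FORMATTER =
      DateTimeFormatter.ofPattern("yyyyMMdd_HHmm");

  // Builds e.g. Datanode_Insights_CID-..._20260502_1115.csv
  static String buildExportFileName(String clusterID) {
    if (clusterID == null || clusterID.isEmpty()) {
      clusterID = "UnknownCluster";  // fallback seen in the quoted snippet
    }
    String timestamp = LocalDateTime.now().format(FORMATTER);
    return "Datanode_Insights_" + clusterID + "_" + timestamp + ".csv";
  }

  public static void main(String[] args) {
    System.out.println(buildExportFileName(
        "CID-ec3d0c23-7863-4f94-9e48-78134b891a62"));
  }
}
```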

What is the link to the Apache JIRA

https://issues.apache.org/jira/browse/HDDS-15147

How was this patch tested?

Verified with the updated unit test class and by checking the downloaded filename in browser dev tools.


Datanode_Insights_CID-6ece86f7-d869-411e-832c-1dad626c0991_20260505_1314.csv

Successful CI : https://github.com/navinko/ozone/actions/runs/25384413555

Contributor

@sreejasahithi sreejasahithi left a comment


Thanks @navinko, left a few minor comments.

String timestamp = LocalDateTime.now().format(formatter);

// Retrieve clusterId from ReconContext
String clusterName = "UnknownCluster";
Contributor


nit: better to name it as ClusterID

Contributor Author


Done, used clusterID instead of ClusterID because checkstyle warns on non-camel-case names:

"Name 'ClusterID' must match pattern '^[a-z][a-zA-Z0-9]*$'."

);

return ReconUtils.downloadCsv("datanode_storage_and_pending_deletion_stats.csv", headers, data, columns);
DateTimeFormatter formatter = DateTimeFormatter.ofPattern("yyyyMMdd_HHmm");
Contributor


This can be made a private static final field in the class.

Contributor Author


Done.

v -> v.getReport() != null ? formatBytesToGB(v.getReport().getCommitted()) : -1,
v -> v.getReport() != null ? formatBytesToGB(v.getReport().getReserved()) : -1,
v -> v.getReport() != null ? formatBytesToGB(v.getReport().getMinimumFreeSpace()) : -1,
v -> v.getReport() != null ? formatBytesToGB(v.getMetric().getPendingBlockSize()) : -1
Contributor

@sreejasahithi sreejasahithi May 4, 2026


Here, for getting PendingBlockSize, it would be better to add a guard for v.getMetric() != null, since we are reading the value from the metric.

Contributor Author


Done, good catch. It was missed across previous reviews as well.
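The guarded column extractor the reviewer asked for might look like the sketch below. The `DatanodeView` and `Metric` types are stand-ins for the actual Recon classes (which are not shown in this thread); only the null-check pattern itself is the point.

```java
import java.util.function.Function;

public class PendingBlockSizeColumn {
  // Minimal stand-ins for the datanode view types in the quoted snippet.
  static class Metric {
    long getPendingBlockSize() { return 4096L; }
  }

  static class DatanodeView {
    private final Metric metric;
    DatanodeView(Metric metric) { this.metric = metric; }
    Metric getMetric() { return metric; }
  }

  // Guard on getMetric() (not getReport()), since the value is read
  // from the metric; -1 marks a missing value, as in the original lambdas.
  static final Function<DatanodeView, Long> PENDING_BLOCK_SIZE =
      v -> v.getMetric() != null ? v.getMetric().getPendingBlockSize() : -1L;
}
```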

Contributor Author


Thanks @sreejasahithi for reviewing.

@adoroszlai adoroszlai changed the title HDDS-15147. [Cluster Capacity] Standardize Datanode Insight Export Fi… HDDS-15147. Improve Recon Datanode Insight export: filename, size unit May 4, 2026
@priyeshkaratha
Contributor

priyeshkaratha commented May 5, 2026

@arunsarin85 @navinko
I’d prefer to keep the exported sheet values in bytes only. The primary purpose of this CSV report is to enable accurate comparisons and validate calculations, so exact byte-level values are more useful. If users want to view the data in GB, they can apply their own column conversions as needed. So it’s better not to convert the values to GB in the report. Renaming the columns would be fine.
cc: @ChenSammi @devmadhuu, what are your thoughts on this?

@navinko
Contributor Author

navinko commented May 5, 2026

Thanks @priyeshkaratha, I do agree with you.
Converting to GB causes loss of precision for smaller byte values.
In this PR I used byte-to-GB conversion up to 2 decimal places, so in the exported sheet the used space shows as 0.00 for 4.2 MB on the UI. Unless we accept that this will lead to confusion, it is better to go with the byte approach in the CSV.
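The precision loss is easy to demonstrate: with a two-decimal byte-to-GB conversion, 4.2 MB rounds to 0.00. This is a minimal sketch; the actual formatBytesToGB implementation in the PR may differ.

```java
public class BytesToGb {
  // Two-decimal byte-to-GB conversion, as discussed in the thread.
  static double formatBytesToGB(long bytes) {
    return Math.round(bytes / (1024.0 * 1024.0 * 1024.0) * 100.0) / 100.0;
  }

  public static void main(String[] args) {
    long fourPointTwoMb = (long) (4.2 * 1024 * 1024);
    // 4.2 MB is ~0.0041 GB, which rounds to 0.00 at two decimal places.
    System.out.println(formatBytesToGB(fourPointTwoMb));
  }
}
```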

@arunsarin85
Contributor


@priyeshkaratha Agreed! Keeping it in bytes makes more sense.

@navinko
Contributor Author

navinko commented May 5, 2026

Thanks @priyeshkaratha for the review. Addressed the review comments.
