
feat(csharp): fix Phase 1 Thrift telemetry gaps with E2E test coverage#345

Open
jadewang-db wants to merge 23 commits into main from stack/pr-phase1-thrift-telemetry-gaps

Conversation

@jadewang-db
Collaborator

@jadewang-db jadewang-db commented Mar 13, 2026

🥞 Stacked PR

Use this link to review incremental changes.


Summary

Closes telemetry gaps in the Thrift (HiveServer2) code path by populating missing fields in telemetry events and adding comprehensive E2E test coverage.

Telemetry field fixes

  • System configuration: Populate runtime_vendor and client_app_name in DriverSystemConfiguration
  • Auth type: Populate auth_type on the root telemetry log
  • Workspace ID: Populate WorkspaceId in TelemetrySessionContext
  • Connection parameters: Expand DriverConnectionParameters with additional fields
  • Chunk metrics: Add ChunkMetrics aggregation in CloudFetchDownloader, expose via CloudFetchReader interface, and call SetChunkDetails() in DatabricksStatement.EmitTelemetry()
  • Retry count: Track retry_count in SqlExecutionEvent
  • Internal calls: Mark internal calls with is_internal_call flag
  • Metadata operations: Add telemetry for GetObjects and GetTableTypes

E2E test infrastructure & coverage

  • Build CapturingTelemetryExporter test infrastructure for intercepting and asserting on telemetry events
  • Add E2E tests for: baseline telemetry fields, system configuration, auth type, workspace ID, connection parameters, chunk details, chunk metrics aggregation, chunk metrics reader, retry count, internal calls, and metadata operations
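
The capturing-exporter pattern can be sketched as follows. This is a minimal, standalone illustration — the driver's actual ITelemetryExporter signature and CapturingTelemetryExporter implementation may differ; the string event is a stand-in for the real telemetry log type:

```csharp
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Threading.Tasks;

// Stand-in for the driver's exporter interface; the real signature may differ.
interface ITelemetryExporter
{
    Task ExportAsync(object telemetryEvent);
}

// Test double: records every exported event in memory for assertions.
sealed class CapturingTelemetryExporter : ITelemetryExporter
{
    private readonly ConcurrentQueue<object> _events = new ConcurrentQueue<object>();

    public IReadOnlyCollection<object> Events => _events;

    public Task ExportAsync(object telemetryEvent)
    {
        _events.Enqueue(telemetryEvent);
        return Task.CompletedTask;
    }
}

class Program
{
    static async Task Main()
    {
        var exporter = new CapturingTelemetryExporter();
        await exporter.ExportAsync("sql_execution_event");
        Console.WriteLine(exporter.Events.Count); // 1
    }
}
```

Tests install the exporter through the ExporterOverride hook before opening a connection, run an operation, then assert on Events.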

Test plan

  • All new E2E telemetry tests pass locally
  • CI checks pass (C# builds on all platforms, lint, e2e tests)
  • Integration tests pass via merge queue

@jadewang-db jadewang-db changed the title "Build E2E test infrastructure with CapturingTelemetryExporter (Task ID: task-1.1-e2e-test-infrastructure)" to "test(csharp): build E2E test infrastructure with CapturingTelemetryExporter" Mar 13, 2026
@jadewang-db jadewang-db changed the title "test(csharp): build E2E test infrastructure with CapturingTelemetryExporter" to "feat(csharp): fix Phase 1 Thrift telemetry gaps with E2E test coverage" Mar 13, 2026
@jadewang-db
Collaborator Author

Code Review — PR #345

Executive Summary

This PR fills telemetry gaps in the Thrift code path with comprehensive E2E test coverage. The telemetry field additions are well-structured, but there are concerns around test isolation with the static ExporterOverride, premature telemetry emission in metadata operations, and significant code duplication in both production and test code.

Severity Summary

| Severity | Count |
| --- | --- |
| Critical | 1 |
| High | 7 |
| Medium | 10 |
| Low | 8 |

Positive Observations

  • Excellent test coverage breadth -- 12 dedicated test files covering each telemetry field
  • Clean ChunkMetrics abstraction with proper encapsulation behind interfaces
  • Good use of CapturingTelemetryExporter pattern for test observability
  • Rust FFI cleanup is thorough -- no dangling references

Critical

1. ExporterOverride changed from AsyncLocal to static -- breaks test isolation
csharp/src/Telemetry/TelemetryClientManager.cs:54

The previous implementation used AsyncLocal<ITelemetryExporter?> specifically so the override was scoped to the current async context and would not leak across concurrent test classes. Replacing it with a plain static property means:

  1. Concurrent test classes that set ExporterOverride will clobber each other
  2. Because clients are cached per-host, a test that sets ExporterOverride after another test has already cached a client will get the wrong exporter

Suggestion: Restore AsyncLocal semantics, or ensure all telemetry test classes are in the same [Collection("Telemetry")] to force sequential execution.
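
The isolation difference can be shown with a minimal standalone sketch (not the driver's code): an AsyncLocal<T>-backed property keeps each async flow's override private to that flow, while a plain static would be shared and clobbered by every concurrently running test:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

static class Override
{
    // AsyncLocal: each async flow sees only the value it set itself,
    // so parallel test classes cannot clobber each other's exporter.
    private static readonly AsyncLocal<string?> s_asyncLocal = new AsyncLocal<string?>();

    public static string? AsyncLocalValue
    {
        get => s_asyncLocal.Value;
        set => s_asyncLocal.Value = value;
    }
}

class Program
{
    static async Task Main()
    {
        // Two "test classes" running concurrently, each setting the override.
        Task<string?> a = Task.Run(async () =>
        {
            Override.AsyncLocalValue = "exporterA";
            await Task.Delay(50);
            return Override.AsyncLocalValue; // unaffected by the other task
        });
        Task<string?> b = Task.Run(async () =>
        {
            Override.AsyncLocalValue = "exporterB";
            await Task.Delay(10);
            return Override.AsyncLocalValue;
        });
        Console.WriteLine(await a); // exporterA
        Console.WriteLine(await b); // exporterB
    }
}
```

With a plain static property, whichever task set the value last would win for both flows.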


High

2. GetObjects/GetTableTypes emit ResultsConsumed telemetry before results are consumed
csharp/src/DatabricksConnection.cs:516-548 and ~616-648

RecordResultsConsumed() is called in the finally block immediately after creating the IArrowArrayStream. The caller has not yet iterated the stream, so ResultsConsumedMs measures stream creation time, not consumption time. Contrast with DatabricksStatement, which correctly defers to Dispose().

3. _pendingTelemetryContext silently overwritten on repeated Execute calls
csharp/src/DatabricksStatement.cs:145

If ExecuteQuery() is called twice on the same statement, the second call overwrites _pendingTelemetryContext without emitting the first. Telemetry data is silently lost.

4. Initial chunk latency uses first-to-complete rather than first-chunk semantics
csharp/src/Reader/CloudFetch/CloudFetchDownloader.cs:728-732

_initialChunkLatencyMs records the download time of whichever parallel download finishes first, not necessarily chunk 0. If the metric should be "time to download the first logical chunk," track by chunk index instead of completion order.
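
A minimal sketch of first-chunk semantics (the class and member names here are illustrative, not the driver's actual fields): latch the metric on chunk index 0 rather than on whichever parallel download completes first:

```csharp
using System;
using System.Threading;

class ChunkLatencyTracker
{
    private long _initialChunkLatencyMs = -1;

    // First-chunk semantics: only chunk index 0 sets the metric,
    // regardless of which parallel download finishes first.
    public void OnChunkDownloaded(int chunkIndex, long elapsedMs)
    {
        if (chunkIndex == 0)
        {
            Interlocked.CompareExchange(ref _initialChunkLatencyMs, elapsedMs, -1);
        }
    }

    public long InitialChunkLatencyMs => Interlocked.Read(ref _initialChunkLatencyMs);
}

class Program
{
    static void Main()
    {
        var tracker = new ChunkLatencyTracker();
        // Chunk 2 happens to finish first; chunk 0 finishes later.
        tracker.OnChunkDownloaded(chunkIndex: 2, elapsedMs: 80);
        tracker.OnChunkDownloaded(chunkIndex: 0, elapsedMs: 120);
        Console.WriteLine(tracker.InitialChunkLatencyMs); // 120, not 80
    }
}
```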

5. Tests silently pass when no telemetry is captured
csharp/test/E2E/Telemetry/TelemetryBaselineTests.cs:459, ~520

When logs.Count == 0, the test writes a warning and returns -- effectively passing. If the error telemetry path is broken, these tests provide no signal. Use Assert.NotEmpty(logs) or Skip.If(...).

6. RetryCountTests uses Thread.Sleep instead of async polling helper
csharp/test/E2E/Telemetry/RetryCountTests.cs:80

All other test files use WaitForTelemetryEvents(). This file uses Thread.Sleep(1000) and does not use any shared test helpers (CreateConnectionWithCapturingTelemetry, GetProtoLog), making it inconsistent and fragile.

7. Blocking .Result calls on async methods risk deadlocks
csharp/test/E2E/Telemetry/MetadataOperationTests.cs:63, 116, 169, 222, 269, 334

stream.ReadNextRecordBatchAsync().Result is called synchronously. Use await instead to avoid potential deadlocks.
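
A standalone sketch of the fix, where ReadNextAsync is a stand-in for stream.ReadNextRecordBatchAsync():

```csharp
using System;
using System.Threading.Tasks;

class Program
{
    // Stand-in for an async read such as ReadNextRecordBatchAsync().
    static async Task<int> ReadNextAsync()
    {
        await Task.Delay(10);
        return 42;
    }

    static async Task Main()
    {
        // Blocking form the review flags; under a SynchronizationContext
        // (UI frameworks, some test runners) this can deadlock because the
        // continuation needs the very context .Result is blocking:
        //   int batch = ReadNextAsync().Result;

        // Non-blocking form: the continuation resumes without holding a thread.
        int batch = await ReadNextAsync();
        Console.WriteLine(batch); // 42
    }
}
```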


Medium

8. InternalCallTests does not assert the internal call flag
csharp/test/E2E/Telemetry/InternalCallTests.cs:48-110

Despite the name InternalCall_UseSchema_IsMarkedAsInternal, the test never asserts IsInternalCall == true. It iterates and prints values but the core assertion is missing.

9. Missing TimestampMillis in metadata telemetry Context
csharp/src/DatabricksConnection.cs:530, ~630

The TelemetryFrontendLog for GetObjects/GetTableTypes does not set Context with TimestampMillis, unlike DatabricksStatement.EmitTelemetry(). Events will lack timestamps.

10. ~120 lines of duplicated telemetry boilerplate
csharp/src/DatabricksConnection.cs:427-665

GetObjects and GetTableTypes contain nearly identical telemetry code. Extract into a shared helper like ExecuteWithMetadataTelemetry<T>(OperationType, Func<T>).

11. Process.GetCurrentProcess() called twice without caching
csharp/src/DatabricksConnection.cs:986, 995

Allocates a Process object each time. Cache in a local variable.

12. ChunkMetricsAggregationTests assumes CloudFetch without skip guard
csharp/test/E2E/Telemetry/ChunkMetricsAggregationTests.cs:80

Unlike ChunkDetailsTelemetryTests, no skip guard for inline results. Also largely duplicated with ChunkDetailsTelemetryTests.cs.

13. Reflection-based testing is fragile
csharp/test/E2E/Telemetry/ChunkMetricsReaderTests.cs:384-425

Property names accessed via reflection will throw at runtime if renamed. Use [InternalsVisibleTo] instead.

14. Reader resource leak on assertion failure
csharp/test/E2E/Telemetry/ChunkMetricsReaderTests.cs:70

reader?.Dispose() is in the try block, not finally. If an assertion fails before Dispose(), the reader leaks.

15. C# REST integration tests no longer auto-triggered
.github/workflows/trigger-integration-tests.yml

SEA/REST dispatch logic removed. Confirm REST testing is covered elsewhere.

16. Six ResultReaderAdapter unit tests deleted
rust/src/reader/mod.rs

Tests for iteration, error propagation, and schema error handling were removed with test_utils. Consider preserving with inline mocks.

17. _openSessionResp retains reference to large Thrift response
csharp/src/DatabricksConnection.cs:770

Stored solely for workspace ID extraction during telemetry init. After init, the reference keeps the entire session response alive. Set to null after extracting the needed value.


Low

  • 18. ChunkMetrics has public setters -- consider init setters for immutability (ChunkMetrics.cs:29)
  • 19. Two statements on one line in test code reduces readability (TelemetryBaselineTests.cs:61)
  • 20. AuthType_NoAuth_SetsToOther catches bare catch -- should be catch (Exception) (AuthTypeTests.cs:234)
  • 21. Unused using directives in RetryCountTests.cs (System.Net, System.Net.Http)
  • 22. RuntimeVendor hardcoded to "Microsoft" -- inaccurate on Mono (DatabricksConnection.cs:983)
  • 23. IncrementTotalChunksPresent uses lock where Interlocked.Increment would suffice (CloudFetchDownloader.cs:749)
  • 24. Hardcoded "adbc.databricks.batch_size" string in tests -- use constant from DatabricksParameters (ChunkMetricsReaderTests.cs:57)
  • 25. Task.Delay(500) in InternalCallTests -- use WaitForTelemetryEvents helper instead
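
For finding 23, the lock-free counter can be sketched as follows (the class is illustrative; only the method name mirrors the code under review):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

class ChunkCounter
{
    private long _totalChunksPresent;

    // For a single counter, Interlocked.Increment replaces
    // a lock { _totalChunksPresent++; } block with no contention.
    public void IncrementTotalChunksPresent() =>
        Interlocked.Increment(ref _totalChunksPresent);

    public long TotalChunksPresent => Interlocked.Read(ref _totalChunksPresent);
}

class Program
{
    static void Main()
    {
        var counter = new ChunkCounter();
        Parallel.For(0, 1000, _ => counter.IncrementTotalChunksPresent());
        Console.WriteLine(counter.TotalChunksPresent); // 1000
    }
}
```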

Recommendations (prioritized)

  1. Must fix before merge: Restore AsyncLocal or add [Collection] for test isolation (#1)
  2. Should fix: Add missing assertion in InternalCallTests (#8), fix silent-pass tests (#5)
  3. Should fix: Use await instead of .Result (#7), use WaitForTelemetryEvents in RetryCountTests (#6)
  4. Consider: Defer telemetry emission in GetObjects/GetTableTypes (#2), add timestamps (#9)
  5. Nice to have: Reduce duplication (#10, #12), improve resource cleanup (#14)

@jadewang-db jadewang-db marked this pull request as ready for review March 13, 2026 21:49
@jadewang-db
Collaborator Author

Addressed the following findings from the code review in commit ebb8f6c:

Fixed:

Acknowledged (to address in follow-up):

Jade Wang and others added 19 commits March 13, 2026 22:04
Comprehensive gap analysis of telemetry proto field coverage including:
- SEA connections have zero telemetry (highest priority)
- ChunkDetails.SetChunkDetails() defined but never called
- Missing fields: auth_type, WorkspaceId, runtime_vendor, client_app_name
- Composition via TelemetryHelper chosen over abstract base class
- E2E test strategy for all proto fields across both protocols

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…ion
Task ID: task-1.2-system-config-missing-fields
…Task ID: task-1.11-metadata-operation-telemetry
The demo directory was tracked as a git submodule in the index but had no
entry in .gitmodules, causing all CI jobs to fail at checkout.

Co-authored-by: Isaac
…remove scratch files

- Remove scratch/design docs not meant for PR (PHASE1_TEST_RESULTS.md,
  TELEMETRY_TIMING_ISSUE.md, fix-telemetry-gaps-design.md)
- Remove accidentally committed backup file (DatabricksConnection.cs.backup)
- Fix trailing whitespace in test and doc files
- Fix license header in ChunkMetrics.cs (was using "modified" header for new file)

Co-authored-by: Isaac
… resource cleanup

- Restore AsyncLocal for ExporterOverride to prevent parallel test interference
- Add missing IsInternalCall assertion in InternalCallTests
- Replace silent-pass with Skip.If when no telemetry captured in baseline tests
- Use await instead of .Result for async calls in MetadataOperationTests
- Add TimestampMillis to metadata operation telemetry Context
- Cache Process.GetCurrentProcess() call in BuildSystemConfiguration
- Move reader disposal to finally blocks in ChunkMetricsReaderTests

Co-authored-by: Isaac
@jadewang-db jadewang-db force-pushed the stack/pr-phase1-thrift-telemetry-gaps branch from ebb8f6c to d504b0a Compare March 13, 2026 22:08
@jadewang-db
Collaborator Author

Range-diff: stack/fix-telemetry-gaps-design (ebb8f6c -> d504b0a)
csharp/src/DatabricksStatement.cs
@@ -9,7 +9,7 @@
  using AdbcDrivers.Databricks.Result;
  using AdbcDrivers.Databricks.Telemetry;
  using AdbcDrivers.Databricks.Telemetry.Models;
-         private bool runAsyncInThrift;
+         private bool enableComplexDatatypeSupport;
          private Dictionary<string, string>? confOverlay;
          internal string? StatementId { get; set; }
 +        private QueryResult? _lastQueryResult; // Track last query result for telemetry chunk metrics
csharp/src/Telemetry/TelemetryClientManager.cs
@@ -1,30 +0,0 @@
-diff --git a/csharp/src/Telemetry/TelemetryClientManager.cs b/csharp/src/Telemetry/TelemetryClientManager.cs
---- a/csharp/src/Telemetry/TelemetryClientManager.cs
-+++ b/csharp/src/Telemetry/TelemetryClientManager.cs
- */
- 
- using System.Collections.Generic;
-+using System.Threading;
- using System.Threading.Tasks;
- 
- namespace AdbcDrivers.Databricks.Telemetry
-         private readonly Dictionary<string, TelemetryClientHolder> _clients = new Dictionary<string, TelemetryClientHolder>();
-         private readonly object _lock = new object();
- 
-+        private static readonly AsyncLocal<ITelemetryExporter?> s_exporterOverride = new AsyncLocal<ITelemetryExporter?>();
-+
-         /// <summary>
-         /// Optional exporter override for testing. When set, newly created TelemetryClients
-         /// use this exporter instead of the default DatabricksTelemetryExporter pipeline.
-+        /// Uses AsyncLocal to prevent interference across parallel xUnit test classes.
-         /// Must be set before connections are opened and cleared after tests complete.
-         /// </summary>
--        internal static ITelemetryExporter? ExporterOverride { get; set; }
-+        internal static ITelemetryExporter? ExporterOverride
-+        {
-+            get => s_exporterOverride.Value;
-+            set => s_exporterOverride.Value = value;
-+        }
- 
-         /// <summary>
-         /// Internal constructor. Public API uses GetInstance() for the singleton.
\ No newline at end of file

Reproduce locally: git range-diff 094dc6e..ebb8f6c 425a554..d504b0a | Disable: git config gitstack.push-range-diff false

/// Gets or sets the maximum time taken to download any single chunk in milliseconds.
/// Identifies the slowest chunk download, useful for identifying performance outliers.
/// </summary>
public long SlowestChunkLatencyMs { get; set; }
Collaborator

Does this make sense? Each chunk can have a different size, so maybe it's better to track the raw speed instead, i.e. chunk size / download time.

Collaborator Author

Good point — SlowestChunkLatencyMs is less meaningful when chunk sizes vary. I'll add a throughput-based metric in a follow-up. For now, SumChunksDownloadTimeMs / TotalChunksIterated gives average latency, and we track TotalChunksPresent + TotalChunksIterated for volume. Adding per-chunk size tracking would require changes to the download pipeline to capture byte counts, which I'd prefer to scope separately.
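
The averaging described above, plus the size-normalized alternative the reviewer suggests, can be sketched like this (ChunkMetricsSummary and TotalBytesDownloaded are illustrative stand-ins, not the driver's actual ChunkMetrics shape):

```csharp
using System;

class ChunkMetricsSummary
{
    public long SumChunksDownloadTimeMs { get; set; }
    public long TotalChunksIterated { get; set; }
    // Hypothetical field for a follow-up throughput metric.
    public long TotalBytesDownloaded { get; set; }

    // Average per-chunk latency: the interim metric available today.
    public double AverageChunkLatencyMs =>
        TotalChunksIterated == 0 ? 0 : (double)SumChunksDownloadTimeMs / TotalChunksIterated;

    // Size-normalized alternative to SlowestChunkLatencyMs.
    public double ThroughputBytesPerMs =>
        SumChunksDownloadTimeMs == 0 ? 0 : (double)TotalBytesDownloaded / SumChunksDownloadTimeMs;
}

class Program
{
    static void Main()
    {
        var m = new ChunkMetricsSummary
        {
            SumChunksDownloadTimeMs = 600,
            TotalChunksIterated = 3,
            TotalBytesDownloaded = 6_000_000,
        };
        Console.WriteLine(m.AverageChunkLatencyMs); // 200
        Console.WriteLine(m.ThroughputBytesPerMs);  // 10000
    }
}
```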

{
try
{
telemetryContext.RecordFirstBatchReady();
Collaborator

Same issue here for the metadata query path.

Collaborator Author

✅ Done — Same fix applied. Both GetObjects and GetTableTypes now use the shared ExecuteWithMetadataTelemetry<T>() helper with no batch/consumption timing.

}

// Strategy 2: Check connection property as fallback
if (workspaceId == 0 && Properties.TryGetValue("adbc.databricks.workspace_id", out string? workspaceIdProp))
Collaborator

I don't think we ever supported this param: adbc.databricks.workspace_id

Collaborator

We should get it from this header:
x-databricks-org-id: xxx

(screenshot: response headers showing x-databricks-org-id)

This header exists on both Thrift call responses and HTTP call responses; we can get it from:

  • talking with the feature-flag service endpoint
  • talking with the metrics endpoint

Collaborator Author

✅ Done — Removed the adbc.databricks.workspace_id property lookup and the config-based extraction. Now using PropertyHelper.ParseOrgIdFromProperties(Properties) which extracts the org ID from the HTTP path query string (e.g., ?o=12345). This is the same org ID used for the x-databricks-org-id header elsewhere in the driver.

Collaborator

Why does this lookup still exist? adbc.databricks.workspace_id

Collaborator

Extracting the org ID from the HTTP path would only work for SPOG URLs, where the org ID is in the path, not for current legacy URLs.
I think the best chance is still to extract the orgId from the OpenSession response HTTP header.

Collaborator Author

In telemetry reporting there is no need for the workspace ID; we should remove this logic.

Collaborator Author

✅ Done — Good point about SPOG vs legacy URLs. Replaced PropertyHelper.ParseOrgIdFromProperties() with a new OrgIdCaptureHandler (DelegatingHandler) that captures the x-databricks-org-id header from the first successful HTTP response. This works for both SPOG and legacy URLs since the header is always present in Thrift call responses. Also removed the test that used the unsupported adbc.databricks.workspace_id connection property.
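
A hedged sketch of the handler approach described above (the driver's real OrgIdCaptureHandler may differ in detail; StubHandler exists only to make the example runnable without a network):

```csharp
using System;
using System.Net;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

// Captures the x-databricks-org-id header from the first successful response.
sealed class OrgIdCaptureHandler : DelegatingHandler
{
    private long _orgId; // 0 until captured

    public long OrgId => Interlocked.Read(ref _orgId);

    protected override async Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        HttpResponseMessage response = await base.SendAsync(request, cancellationToken);
        if (response.IsSuccessStatusCode
            && Interlocked.Read(ref _orgId) == 0
            && response.Headers.TryGetValues("x-databricks-org-id", out var values))
        {
            foreach (string value in values)
            {
                if (long.TryParse(value, out long parsed) && parsed != 0)
                {
                    // Only the first capture wins; later responses are ignored.
                    Interlocked.CompareExchange(ref _orgId, parsed, 0);
                    break;
                }
            }
        }
        return response;
    }
}

// Fake inner handler that returns the header without touching the network.
sealed class StubHandler : HttpMessageHandler
{
    protected override Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        var response = new HttpResponseMessage(HttpStatusCode.OK);
        response.Headers.Add("x-databricks-org-id", "12345");
        return Task.FromResult(response);
    }
}

class Program
{
    static async Task Main()
    {
        var capture = new OrgIdCaptureHandler { InnerHandler = new StubHandler() };
        using var client = new HttpClient(capture);
        await client.GetAsync("http://example.invalid/");
        Console.WriteLine(capture.OrgId); // 12345
    }
}
```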


This comment was generated with GitHub MCP.

/// <returns>The batch size value.</returns>
private int GetBatchSize()
{
const int DefaultBatchSize = 50000; // HiveServer2Connection.BatchSizeDefault
Collaborator

Databricks default is 2M:
private const long DatabricksBatchSizeDefault = 2000000;

Collaborator Author

✅ Done — Fixed default to 2000000 to match DatabricksStatement.DatabricksBatchSizeDefault.

private bool enableComplexDatatypeSupport;
private Dictionary<string, string>? confOverlay;
internal string? StatementId { get; set; }
private QueryResult? _lastQueryResult; // Track last query result for telemetry chunk metrics
Collaborator

There are some metadata queries in DatabricksStatement, like GetTablesAsync etc. Do we plan to emit telemetry for those?

Collaborator Author

The metadata queries in DatabricksStatement (like GetTablesAsync, GetColumnsAsync) go through the Thrift execute path and already pick up telemetry from _pendingTelemetryContext in the statement lifecycle. The GetObjects/GetTableTypes overrides in DatabricksConnection handle the ADBC-level metadata API calls that bypass the statement path. So both paths are covered.

Collaborator

This seems wrong: metadata calls like GetTablesAsync/GetColumnsAsync route to HiveServer2Connection, which calls Thrift metadata directly (https://github.com/adbc-drivers/hiveserver2/blob/825f9b96b810c9fbbcfa452392b1f4f903547568/csharp/src/Hive2/HiveServer2Connection.cs#L1049), so they do not go through the ExecuteQuery path in this file.

Collaborator Author

please verify this.

Collaborator Author

You're right — GetTablesAsync/GetColumnsAsync route directly to HiveServer2Connection Thrift metadata methods, not through DatabricksStatement.ExecuteQuery. My earlier reply was incorrect. Those metadata calls don't go through the statement telemetry path. I'll file a follow-up to add telemetry for the HiveServer2-level metadata calls.


This comment was generated with GitHub MCP.

Collaborator Author

@jadewang-db jadewang-db left a comment

Submitting pending review to unblock comment replies.

@jadewang-db
Collaborator Author

Range-diff: stack/fix-telemetry-gaps-design (d504b0a -> 6a928e7)
csharp/src/DatabricksConnection.cs
@@ -22,125 +22,18 @@
 +            IReadOnlyList<string>? tableTypes,
 +            string? columnNamePattern)
 +        {
-+            return this.TraceActivity(activity =>
++            var operationType = depth switch
 +            {
-+                // Determine operation type based on depth
-+                Telemetry.Proto.Operation.Types.Type operationType = depth switch
-+                {
-+                    GetObjectsDepth.Catalogs => Telemetry.Proto.Operation.Types.Type.ListCatalogs,
-+                    GetObjectsDepth.DbSchemas => Telemetry.Proto.Operation.Types.Type.ListSchemas,
-+                    GetObjectsDepth.Tables => Telemetry.Proto.Operation.Types.Type.ListTables,
-+                    GetObjectsDepth.All => Telemetry.Proto.Operation.Types.Type.ListColumns,
-+                    _ => Telemetry.Proto.Operation.Types.Type.Unspecified
-+                };
-+
-+                // Create telemetry context for this metadata operation
-+                StatementTelemetryContext? telemetryContext = null;
-+                try
-+                {
-+                    if (TelemetrySession?.TelemetryClient != null)
-+                    {
-+                        telemetryContext = new StatementTelemetryContext(TelemetrySession)
-+                        {
-+                            StatementType = Telemetry.Proto.Statement.Types.Type.Metadata,
-+                            OperationType = operationType,
-+                            ResultFormat = Telemetry.Proto.ExecutionResult.Types.Format.InlineArrow,
-+                            IsCompressed = false
-+                        };
-+
-+                        activity?.SetTag("telemetry.operation_type", operationType.ToString());
-+                        activity?.SetTag("telemetry.statement_type", "METADATA");
-+                    }
-+                }
-+                catch (Exception ex)
-+                {
-+                    // Swallow telemetry errors per design requirement
-+                    activity?.AddEvent(new System.Diagnostics.ActivityEvent("telemetry.context_creation.error",
-+                        tags: new System.Diagnostics.ActivityTagsCollection
-+                        {
-+                            { "error.type", ex.GetType().Name },
-+                            { "error.message", ex.Message }
-+                        }));
-+                }
++                GetObjectsDepth.Catalogs => Telemetry.Proto.Operation.Types.Type.ListCatalogs,
++                GetObjectsDepth.DbSchemas => Telemetry.Proto.Operation.Types.Type.ListSchemas,
++                GetObjectsDepth.Tables => Telemetry.Proto.Operation.Types.Type.ListTables,
++                GetObjectsDepth.All => Telemetry.Proto.Operation.Types.Type.ListColumns,
++                _ => Telemetry.Proto.Operation.Types.Type.Unspecified
++            };
 +
-+                IArrowArrayStream result;
-+                try
-+                {
-+                    // Call base implementation to get the actual results
-+                    result = base.GetObjects(depth, catalogPattern, dbSchemaPattern, tableNamePattern, tableTypes, columnNamePattern);
-+
-+                    // Record success
-+                    if (telemetryContext != null)
-+                    {
-+                        try
-+                        {
-+                            telemetryContext.RecordFirstBatchReady();
-+                        }
-+                        catch
-+                        {
-+                            // Swallow telemetry errors
-+                        }
-+                    }
-+                }
-+                catch (Exception ex)
-+                {
-+                    // Record error in telemetry
-+                    if (telemetryContext != null)
-+                    {
-+                        try
-+                        {
-+                            telemetryContext.HasError = true;
-+                            telemetryContext.ErrorName = ex.GetType().Name;
-+                            telemetryContext.ErrorMessage = ex.Message;
-+                        }
-+                        catch
-+                        {
-+                            // Swallow telemetry errors
-+                        }
-+                    }
-+                    throw;
-+                }
-+                finally
-+                {
-+                    // Emit telemetry
-+                    if (telemetryContext != null)
-+                    {
-+                        try
-+                        {
-+                            telemetryContext.RecordResultsConsumed();
-+                            var telemetryLog = telemetryContext.BuildTelemetryLog();
-+
-+                            var frontendLog = new Telemetry.Models.TelemetryFrontendLog
-+                            {
-+                                WorkspaceId = telemetryContext.WorkspaceId,
-+                                FrontendLogEventId = Guid.NewGuid().ToString(),
-+                                Context = new Telemetry.Models.FrontendLogContext
-+                                {
-+                                    TimestampMillis = DateTimeOffset.UtcNow.ToUnixTimeMilliseconds(),
-+                                },
-+                                Entry = new Telemetry.Models.FrontendLogEntry
-+                                {
-+                                    SqlDriverLog = telemetryLog
-+                                }
-+                            };
-+
-+                            TelemetrySession?.TelemetryClient?.Enqueue(frontendLog);
-+                        }
-+                        catch (Exception ex)
-+                        {
-+                            // Swallow telemetry errors per design requirement
-+                            activity?.AddEvent(new System.Diagnostics.ActivityEvent("telemetry.emit.error",
-+                                tags: new System.Diagnostics.ActivityTagsCollection
-+                                {
-+                                    { "error.type", ex.GetType().Name },
-+                                    { "error.message", ex.Message }
-+                                }));
-+                        }
-+                    }
-+                }
-+
-+                return result;
-+            });
++            return ExecuteWithMetadataTelemetry(
++                operationType,
++                () => base.GetObjects(depth, catalogPattern, dbSchemaPattern, tableNamePattern, tableTypes, columnNamePattern));
 +        }
 +
 +        /// <summary>
@@ -148,9 +41,19 @@
 +        /// </summary>
 +        public override IArrowArrayStream GetTableTypes()
 +        {
++            return ExecuteWithMetadataTelemetry(
++                Telemetry.Proto.Operation.Types.Type.ListTableTypes,
++                () => base.GetTableTypes());
++        }
++
++        /// <summary>
++        /// Executes a metadata operation with telemetry instrumentation.
++        /// Metadata operations don't track batch/consumption timing since results are returned inline.
++        /// </summary>
++        private T ExecuteWithMetadataTelemetry<T>(Telemetry.Proto.Operation.Types.Type operationType, Func<T> operation)
++        {
 +            return this.TraceActivity(activity =>
 +            {
-+                // Create telemetry context for this metadata operation
 +                StatementTelemetryContext? telemetryContext = null;
 +                try
 +                {
@@ -159,18 +62,17 @@
 +                        telemetryContext = new StatementTelemetryContext(TelemetrySession)
 +                        {
 +                            StatementType = Telemetry.Proto.Statement.Types.Type.Metadata,
-+                            OperationType = Telemetry.Proto.Operation.Types.Type.ListTableTypes,
++                            OperationType = operationType,
 +                            ResultFormat = Telemetry.Proto.ExecutionResult.Types.Format.InlineArrow,
 +                            IsCompressed = false
 +                        };
 +
-+                        activity?.SetTag("telemetry.operation_type", "LIST_TABLE_TYPES");
++                        activity?.SetTag("telemetry.operation_type", operationType.ToString());
 +                        activity?.SetTag("telemetry.statement_type", "METADATA");
 +                    }
 +                }
 +                catch (Exception ex)
 +                {
-+                    // Swallow telemetry errors per design requirement
 +                    activity?.AddEvent(new System.Diagnostics.ActivityEvent("telemetry.context_creation.error",
 +                        tags: new System.Diagnostics.ActivityTagsCollection
 +                        {
@@ -179,28 +81,13 @@
 +                        }));
 +                }
 +
-+                IArrowArrayStream result;
++                T result;
 +                try
 +                {
-+                    // Call base implementation to get the actual results
-+                    result = base.GetTableTypes();
-+
-+                    // Record success
-+                    if (telemetryContext != null)
-+                    {
-+                        try
-+                        {
-+                            telemetryContext.RecordFirstBatchReady();
-+                        }
-+                        catch
-+                        {
-+                            // Swallow telemetry errors
-+                        }
-+                    }
++                    result = operation();
 +                }
 +                catch (Exception ex)
 +                {
-+                    // Record error in telemetry
 +                    if (telemetryContext != null)
 +                    {
 +                        try
@@ -218,12 +105,10 @@
 +                }
 +                finally
 +                {
-+                    // Emit telemetry
 +                    if (telemetryContext != null)
 +                    {
 +                        try
 +                        {
-+                            telemetryContext.RecordResultsConsumed();
 +                            var telemetryLog = telemetryContext.BuildTelemetryLog();
 +
 +                            var frontendLog = new Telemetry.Models.TelemetryFrontendLog
@@ -244,7 +129,6 @@
 +                        }
 +                        catch (Exception ex)
 +                        {
-+                            // Swallow telemetry errors per design requirement
 +                            activity?.AddEvent(new System.Diagnostics.ActivityEvent("telemetry.emit.error",
 +                                tags: new System.Diagnostics.ActivityTagsCollection
 +                                {
@@ -274,51 +158,16 @@
                     true, // unauthed failure will be reported separately
                      telemetryConfig);
  
-+                // Extract workspace ID from server configuration or connection properties
-+                // Note: workspace_id may be 0 if not available (e.g., for SQL warehouses without orgId in config)
++                // Extract workspace ID from org ID in the HTTP path (e.g., ?o=12345)
 +                long workspaceId = 0;
-+
-+                // Strategy 1: Try to extract from server configuration (for clusters)
-+                if (_openSessionResp?.__isset.configuration == true && _openSessionResp.Configuration != null)
++                string? orgId = PropertyHelper.ParseOrgIdFromProperties(Properties);
++                if (!string.IsNullOrEmpty(orgId) && long.TryParse(orgId, out long parsedOrgId))
 +                {
-+                    if (_openSessionResp.Configuration.TryGetValue("spark.databricks.clusterUsageTags.orgId", out string? orgIdStr))
-+                    {
-+                        if (long.TryParse(orgIdStr, out long parsedOrgId))
-+                        {
-+                            workspaceId = parsedOrgId;
-+                            activity?.AddEvent(new ActivityEvent("telemetry.workspace_id.extracted_from_config",
-+                                tags: new ActivityTagsCollection { { "workspace_id", workspaceId } }));
-+                        }
-+                        else
-+                        {
-+                            activity?.AddEvent(new ActivityEvent("telemetry.workspace_id.parse_failed",
-+                                tags: new ActivityTagsCollection { { "orgId_value", orgIdStr } }));
-+                        }
-+                    }
++                    workspaceId = parsedOrgId;
++                    activity?.AddEvent(new ActivityEvent("telemetry.workspace_id.from_org_id",
++                        tags: new ActivityTagsCollection { { "workspace_id", workspaceId } }));
 +                }
 +
-+                // Strategy 2: Check connection property as fallback
-+                if (workspaceId == 0 && Properties.TryGetValue("adbc.databricks.workspace_id", out string? workspaceIdProp))
-+                {
-+                    if (long.TryParse(workspaceIdProp, out long propWorkspaceId))
-+                    {
-+                        workspaceId = propWorkspaceId;
-+                        activity?.AddEvent(new ActivityEvent("telemetry.workspace_id.from_property",
-+                            tags: new ActivityTagsCollection { { "workspace_id", workspaceId } }));
-+                    }
-+                }
-+
-+                // Log if workspace ID could not be determined
-+                if (workspaceId == 0)
-+                {
-+                    activity?.AddEvent(new ActivityEvent("telemetry.workspace_id.unavailable",
-+                        tags: new ActivityTagsCollection
-+                        {
-+                            { "reason", "Not available in server config or connection properties" },
-+                            { "workaround", "Set adbc.databricks.workspace_id connection property if needed" }
-+                        }));
-+                }
-+
                  // Create session-level telemetry context for V3 direct-object pipeline
                  TelemetrySession = new TelemetrySessionContext
                  {
@@ -350,20 +199,10 @@
                  CharSetEncoding = System.Text.Encoding.Default.WebName,
 -                ProcessName = System.Diagnostics.Process.GetCurrentProcess().ProcessName
 +                ProcessName = processName,
-+                ClientAppName = GetClientAppName(processName)
++                ClientAppName = processName
              };
          }
  
-+        private string GetClientAppName(string processName)
-+        {
-+            // Check connection property first, fall back to process name
-+            Properties.TryGetValue("adbc.databricks.client_app_name", out string? appName);
-+            return appName ?? processName;
-+        }
-+
-         private Telemetry.Proto.DriverConnectionParameters BuildDriverConnectionParams(bool isAuthenticated)
-         {
-             Properties.TryGetValue("adbc.spark.http_path", out string? httpPath);
                  },
                  AuthMech = authMech,
                  AuthFlow = authFlow,
@@ -382,7 +221,7 @@
 +        /// <returns>The batch size value.</returns>
 +        private int GetBatchSize()
 +        {
-+            const int DefaultBatchSize = 50000; // HiveServer2Connection.BatchSizeDefault
++            const int DefaultBatchSize = 2000000; // DatabricksStatement.DatabricksBatchSizeDefault
 +            if (Properties.TryGetValue(ApacheParameters.BatchSize, out string? batchSizeStr) &&
 +                int.TryParse(batchSizeStr, out int batchSize))
 +            {
@@ -408,26 +247,19 @@
 +
 +        /// <summary>
 +        /// Determines the auth_type string based on connection properties.
-+        /// Mapping: PAT -> 'pat', OAuth client_credentials -> 'oauth-m2m', OAuth browser -> 'oauth-u2m', Other -> 'other'
++        /// Format: auth_type or auth_type-grant_type (for OAuth).
++        /// Mapping: PAT -> 'pat', OAuth -> 'oauth-{grant_type}', Other -> 'other'
 +        /// </summary>
 +        /// <returns>The auth_type string value.</returns>
 +        private string DetermineAuthType()
 +        {
-+            // Check for OAuth grant type first
++            // Format: auth_type or auth_type-grant_type (for OAuth)
 +            Properties.TryGetValue(DatabricksParameters.OAuthGrantType, out string? grantType);
 +
 +            if (!string.IsNullOrEmpty(grantType))
 +            {
-+                if (grantType == DatabricksConstants.OAuthGrantTypes.ClientCredentials)
-+                {
-+                    // OAuth M2M (machine-to-machine) - client credentials flow
-+                    return "oauth-m2m";
-+                }
-+                else if (grantType == DatabricksConstants.OAuthGrantTypes.AccessToken)
-+                {
-+                    // OAuth U2M (user-to-machine) - browser-based flow with access token
-+                    return "oauth-u2m";
-+                }
++                // OAuth with grant type: oauth-{grant_type}
++                return $"oauth-{grantType}";
 +            }
 +
 +            // Check for PAT (Personal Access Token)
csharp/src/Telemetry/TelemetrySessionContext.cs
@@ -7,7 +7,7 @@
 +
 +        /// <summary>
 +        /// Gets the authentication type for this connection.
-+        /// Examples: "pat", "oauth-m2m", "oauth-u2m", "other"
++        /// Examples: "pat", "oauth-client_credentials", "oauth-access_token", "other"
 +        /// </summary>
 +        public string? AuthType { get; internal set; }
      }
csharp/test/E2E/Telemetry/AuthTypeTests.cs
@@ -97,10 +97,10 @@
 +        }
 +
 +        /// <summary>
-+        /// Tests that auth_type is set to 'oauth-m2m' when using OAuth client_credentials flow.
++        /// Tests that auth_type is set to 'oauth-client_credentials' when using OAuth client_credentials flow.
 +        /// </summary>
 +        [SkippableFact]
-+        public async Task AuthType_OAuthClientCredentials_SetsToOAuthM2M()
++        public async Task AuthType_OAuthClientCredentials_SetsToOAuthClientCredentials()
 +        {
 +            CapturingTelemetryExporter exporter = null!;
 +            AdbcConnection? connection = null;
@@ -139,9 +139,9 @@
 +
 +                var protoLog = TelemetryTestHelpers.GetProtoLog(logs[0]);
 +
-+                // Assert auth_type is set to "oauth-m2m"
++                // Assert auth_type is set to "oauth-client_credentials"
 +                Assert.NotNull(protoLog);
-+                Assert.Equal("oauth-m2m", protoLog.AuthType);
++                Assert.Equal("oauth-client_credentials", protoLog.AuthType);
 +
 +                OutputHelper?.WriteLine($"✓ auth_type correctly set to: {protoLog.AuthType}");
 +            }
@@ -153,10 +153,10 @@
 +        }
 +
 +        /// <summary>
-+        /// Tests that auth_type is set to 'oauth-u2m' when using OAuth access_token flow.
++        /// Tests that auth_type is set to 'oauth-access_token' when using OAuth access_token flow.
 +        /// </summary>
 +        [SkippableFact]
-+        public async Task AuthType_OAuthAccessToken_SetsToOAuthU2M()
++        public async Task AuthType_OAuthAccessToken_SetsToOAuthAccessToken()
 +        {
 +            CapturingTelemetryExporter exporter = null!;
 +            AdbcConnection? connection = null;
@@ -196,9 +196,9 @@
 +
 +                var protoLog = TelemetryTestHelpers.GetProtoLog(logs[0]);
 +
-+                // Assert auth_type is set to "oauth-u2m"
++                // Assert auth_type is set to "oauth-access_token"
 +                Assert.NotNull(protoLog);
-+                Assert.Equal("oauth-u2m", protoLog.AuthType);
++                Assert.Equal("oauth-access_token", protoLog.AuthType);
 +
 +                OutputHelper?.WriteLine($"✓ auth_type correctly set to: {protoLog.AuthType}");
 +            }
@@ -301,7 +301,7 @@
 +                Assert.False(string.IsNullOrEmpty(protoLog.AuthType), "auth_type should never be null or empty");
 +
 +                // Assert it's one of the expected values
-+                var validAuthTypes = new[] { "pat", "oauth-m2m", "oauth-u2m", "other" };
++                var validAuthTypes = new[] { "pat", "oauth-client_credentials", "oauth-access_token", "other" };
 +                Assert.Contains(protoLog.AuthType, validAuthTypes);
 +
 +                OutputHelper?.WriteLine($"✓ auth_type populated with valid value: {protoLog.AuthType}");
csharp/test/E2E/Telemetry/SystemConfigurationTests.cs
@@ -84,56 +84,10 @@
 +        }
 +
 +        /// <summary>
-+        /// Tests that client_app_name is populated from connection property when provided.
-+        /// </summary>
-+        [SkippableFact]
-+        public async Task SystemConfig_ClientAppName_FromConnectionProperty()
-+        {
-+            CapturingTelemetryExporter exporter = null!;
-+            AdbcConnection? connection = null;
-+
-+            try
-+            {
-+                var properties = TestEnvironment.GetDriverParameters(TestConfiguration);
-+
-+                // Set custom client app name via connection property
-+                string customAppName = "MyCustomApp-E2ETest";
-+                properties["adbc.databricks.client_app_name"] = customAppName;
-+
-+                (connection, exporter) = TelemetryTestHelpers.CreateConnectionWithCapturingTelemetry(properties);
-+
-+                // Execute a simple query to trigger telemetry
-+                using var statement = connection.CreateStatement();
-+                statement.SqlQuery = "SELECT 1 AS test_value";
-+                var result = statement.ExecuteQuery();
-+                using var reader = result.Stream;
-+
-+                statement.Dispose();
-+
-+                // Wait for telemetry to be captured
-+                var logs = await TelemetryTestHelpers.WaitForTelemetryEvents(exporter, expectedCount: 1);
-+                TelemetryTestHelpers.AssertLogCount(logs, 1);
-+
-+                var protoLog = TelemetryTestHelpers.GetProtoLog(logs[0]);
-+
-+                // Assert client_app_name matches the custom value from connection property
-+                Assert.NotNull(protoLog.SystemConfiguration);
-+                Assert.Equal(customAppName, protoLog.SystemConfiguration.ClientAppName);
-+
-+                OutputHelper?.WriteLine($"✓ client_app_name from property: {protoLog.SystemConfiguration.ClientAppName}");
-+            }
-+            finally
-+            {
-+                connection?.Dispose();
-+                TelemetryTestHelpers.ClearExporterOverride();
-+            }
-+        }
-+
-+        /// <summary>
-+        /// Tests that client_app_name defaults to process name when connection property is not provided.
++        /// Tests that client_app_name is always set to the process name.
 +        /// </summary>
 +        [SkippableFact]
-+        public async Task SystemConfig_ClientAppName_DefaultsToProcessName()
++        public async Task SystemConfig_ClientAppName_IsProcessName()
 +        {
 +            CapturingTelemetryExporter exporter = null!;
 +            AdbcConnection? connection = null;
@@ -141,9 +95,6 @@
 +            try
 +            {
 +                var properties = TestEnvironment.GetDriverParameters(TestConfiguration);
-+
-+                // DO NOT set client_app_name property - should default to process name
-+                properties.Remove("adbc.databricks.client_app_name");
 +
 +                (connection, exporter) = TelemetryTestHelpers.CreateConnectionWithCapturingTelemetry(properties);
 +

Reproduce locally: git range-diff 425a554..d504b0a 425a554..6a928e7 | Disable: git config gitstack.push-range-diff false

@jadewang-db

Range-diff: stack/fix-telemetry-gaps-design (6a928e7 -> 490d943)
.gitignore
@@ -1,9 +0,0 @@
-diff --git a/.gitignore b/.gitignore
---- a/.gitignore
-+++ b/.gitignore
- 
- # Git worktrees
- .worktrees/
-+
-+# Demo directory (local only)
-+demo/
\ No newline at end of file
docs/designs/fix-telemetry-gaps-design.md
@@ -1,696 +0,0 @@
-diff --git a/docs/designs/fix-telemetry-gaps-design.md b/docs/designs/fix-telemetry-gaps-design.md
-deleted file mode 100644
---- a/docs/designs/fix-telemetry-gaps-design.md
-+++ /dev/null
--# Fix Telemetry Gaps - Design Document
--
--## Objective
--
--Ensure the ADBC C# driver reports **all** proto-defined telemetry fields to the Databricks backend, matching the JDBC driver's coverage. Close gaps in field population, expand coverage to metadata operations, and add E2E tests verifying every proto field.
--
-----
--
--## Current State
--
--The driver has a working telemetry pipeline:
--
--```mermaid
--sequenceDiagram
--    participant Stmt as DatabricksStatement
--    participant Ctx as StatementTelemetryContext
--    participant Client as TelemetryClient
--    participant Exporter as DatabricksTelemetryExporter
--    participant Backend as Databricks Backend
--
--    Stmt->>Ctx: CreateTelemetryContext()
--    Stmt->>Stmt: Execute query/update
--    Stmt->>Ctx: RecordSuccess / RecordError
--    Stmt->>Ctx: BuildTelemetryLog()
--    Ctx-->>Stmt: OssSqlDriverTelemetryLog
--    Stmt->>Client: Enqueue(frontendLog)
--    Client->>Exporter: ExportAsync(batch)
--    Exporter->>Backend: POST /telemetry-ext
--```
--
--However, a gap analysis against the proto schema reveals **multiple fields that are not populated or not covered**.
--
--### Two Connection Protocols
--
--The driver supports two protocols selected via `adbc.databricks.protocol`:
--
--```mermaid
--flowchart TD
--    DB[DatabricksDatabase.Connect] -->|protocol=thrift| Thrift[DatabricksConnection]
--    DB -->|protocol=rest| SEA[StatementExecutionConnection]
--    Thrift --> ThriftStmt[DatabricksStatement]
--    SEA --> SEAStmt[StatementExecutionStatement]
--    ThriftStmt --> TC[TelemetryClient]
--    SEAStmt -.->|NOT WIRED| TC
--```
--
--| Aspect | Thrift (DatabricksConnection) | SEA (StatementExecutionConnection) |
--|---|---|---|
--| Base class | SparkHttpConnection | TracingConnection |
--| Session creation | `OpenSessionWithInitialNamespace()` Thrift RPC | `CreateSessionAsync()` REST API |
--| Result format | Inline Arrow batches via Thrift | ARROW_STREAM (configurable disposition) |
--| CloudFetch | `ThriftResultFetcher` via `FetchResults()` | `StatementExecutionResultFetcher` via `GetResultChunkAsync()` |
--| Catalog discovery | Returned in OpenSessionResp | Explicit `SELECT CURRENT_CATALOG()` |
--| Telemetry | Fully wired | **ZERO telemetry** |
--
--**Critical gap: `StatementExecutionConnection` does not create a `TelemetrySessionContext`, does not initialize a `TelemetryClient`, and `StatementExecutionStatement` does not emit any telemetry events.**
--
-----
--
--## Gap Analysis
--
--### Gap 0: SEA Connection Has No Telemetry
--
--`StatementExecutionConnection` is a completely separate class from `DatabricksConnection`. It has:
--- No `InitializeTelemetry()` call
--- No `TelemetrySessionContext` creation
--- No `TelemetryClient` initialization
--- `StatementExecutionStatement` has no telemetry context creation or `EmitTelemetry()` calls
--- `DriverMode` is hardcoded to `THRIFT` in `DatabricksConnection.BuildDriverConnectionParams()` - there is no code path that ever sets `SEA`
--
--### Proto Field Coverage Matrix (Thrift only)
--
--#### OssSqlDriverTelemetryLog (root)
--
--| Proto Field | Status | Gap Description |
--|---|---|---|
--| `session_id` | Populated | Set from SessionHandle |
--| `sql_statement_id` | Populated | Set from StatementId |
--| `system_configuration` | Partial | Missing `runtime_vendor`, `client_app_name` |
--| `driver_connection_params` | Partial | Only 5 of 47 fields populated |
--| `auth_type` | **NOT SET** | String field never populated |
--| `vol_operation` | **NOT SET** | Volume operations not instrumented |
--| `sql_operation` | Populated | Most sub-fields covered |
--| `error_info` | Populated | `stack_trace` intentionally empty |
--| `operation_latency_ms` | Populated | From stopwatch |
--
--#### DriverSystemConfiguration (12 fields)
--
--| Proto Field | Status | Notes |
--|---|---|---|
--| `driver_version` | Populated | Assembly version |
--| `runtime_name` | Populated | FrameworkDescription |
--| `runtime_version` | Populated | Environment.Version |
--| `runtime_vendor` | **NOT SET** | Should be "Microsoft" for .NET |
--| `os_name` | Populated | OSVersion.Platform |
--| `os_version` | Populated | OSVersion.Version |
--| `os_arch` | Populated | RuntimeInformation.OSArchitecture |
--| `driver_name` | Populated | "Databricks ADBC Driver" |
--| `client_app_name` | **NOT SET** | Should come from connection property or user-agent |
--| `locale_name` | Populated | CultureInfo.CurrentCulture |
--| `char_set_encoding` | Populated | Encoding.Default.WebName |
--| `process_name` | Populated | Process name |
--
--#### DriverConnectionParameters (47 fields)
--
--| Proto Field | Status | Notes |
--|---|---|---|
--| `http_path` | Populated | |
--| `mode` | Populated | Hardcoded to THRIFT |
--| `host_info` | Populated | |
--| `auth_mech` | Populated | PAT or OAUTH |
--| `auth_flow` | Populated | TOKEN_PASSTHROUGH or CLIENT_CREDENTIALS |
--| `use_proxy` | **NOT SET** | |
--| `auth_scope` | **NOT SET** | |
--| `use_system_proxy` | **NOT SET** | |
--| `rows_fetched_per_block` | **NOT SET** | Available from batch size config |
--| `socket_timeout` | **NOT SET** | Available from connection properties |
--| `enable_arrow` | **NOT SET** | Always true for this driver |
--| `enable_direct_results` | **NOT SET** | Available from connection config |
--| `auto_commit` | **NOT SET** | Available from connection properties |
--| `enable_complex_datatype_support` | **NOT SET** | Available from connection properties |
--| Other 28 fields | **NOT SET** | Many are Java/JDBC-specific, N/A for C# |
--
--#### SqlExecutionEvent (9 fields)
--
--| Proto Field | Status | Notes |
--|---|---|---|
--| `statement_type` | Populated | QUERY or UPDATE |
--| `is_compressed` | Populated | From LZ4 flag |
--| `execution_result` | Populated | INLINE_ARROW or EXTERNAL_LINKS |
--| `chunk_id` | Not applicable | For individual chunk failure events |
--| `retry_count` | **NOT SET** | Should track retries |
--| `chunk_details` | **NOT WIRED** | `SetChunkDetails()` exists but is never called (see below) |
--| `result_latency` | Populated | First batch + consumption |
--| `operation_detail` | Partial | `is_internal_call` hardcoded false |
--| `java_uses_patched_arrow` | Not applicable | Java-specific |
--
--#### ChunkDetails (5 fields) - NOT WIRED
--
--`StatementTelemetryContext.SetChunkDetails()` is defined but **never called anywhere** in the codebase. The CloudFetch pipeline tracks per-chunk timing in `Activity` events (OpenTelemetry traces) but does not bridge the data back to the telemetry proto.
--
--| Proto Field | Status | Notes |
--|---|---|---|
--| `initial_chunk_latency_millis` | **NOT WIRED** | Tracked in CloudFetchDownloader Activity events but not passed to telemetry context |
--| `slowest_chunk_latency_millis` | **NOT WIRED** | Same - tracked per-file but not aggregated to context |
--| `total_chunks_present` | **NOT WIRED** | Available from result link count |
--| `total_chunks_iterated` | **NOT WIRED** | Available from CloudFetchReader iteration count |
--| `sum_chunks_download_time_millis` | **NOT WIRED** | Tracked as `total_time_ms` in downloader summary but not passed to context |
--
--**Current data flow (broken):**
--```mermaid
--flowchart LR
--    DL[CloudFetchDownloader] -->|per-chunk Stopwatch| Act[Activity Traces]
--    DL -.->|MISSING| Ctx[StatementTelemetryContext]
--    Ctx -->|BuildTelemetryLog| Proto[ChunkDetails proto]
--```
--
--#### OperationDetail (4 fields)
--
--| Proto Field | Status | Notes |
--|---|---|---|
--| `n_operation_status_calls` | Populated | Poll count |
--| `operation_status_latency_millis` | Populated | Poll latency |
--| `operation_type` | Partial | Only EXECUTE_STATEMENT; missing metadata ops |
--| `is_internal_call` | **Hardcoded false** | Should be true for internal queries (e.g., USE SCHEMA) |
--
--#### WorkspaceId in TelemetrySessionContext
--
--| Field | Status | Notes |
--|---|---|---|
--| `WorkspaceId` | **NOT SET** | Declared in TelemetrySessionContext but never populated during InitializeTelemetry() |
--
-----
--
--## Proposed Changes
--
--### 0. Wire Telemetry into StatementExecutionConnection (SEA)
--
--This is the highest-priority gap. SEA connections have zero telemetry coverage.
--
--#### Alternatives Considered: Abstract Base Class vs Composition
--
--**Option A: Abstract base class between Thrift and SEA (not feasible)**
--
--The two protocols have deeply divergent inheritance chains:
--
--```
--Thrift Connection: TracingConnection → HiveServer2Connection → SparkConnection → SparkHttpConnection → DatabricksConnection
--SEA Connection:    TracingConnection → StatementExecutionConnection
--
--Thrift Statement:  TracingStatement → HiveServer2Statement → SparkStatement → DatabricksStatement
--SEA Statement:     TracingStatement → StatementExecutionStatement
--```
--
--C# single inheritance prevents inserting a shared `DatabricksTelemetryConnection` between `TracingConnection` and both leaf classes without also inserting it between 4 intermediate Thrift layers. Additionally:
--- DatabricksStatement implements `IHiveServer2Statement`; SEA doesn't
--- Thrift execution inherits complex protocol/transport logic; SEA uses a REST client
--- The Thrift chain lives in a separate `hiveserver2` project with its own assembly
--
--**Option B: Shared interface with default methods (C# 8+)**
--
--Could define `ITelemetryConnection` with default method implementations, but:
--- Default interface methods can't access private/protected state
--- Would still need duplicated field declarations in each class
--- Awkward pattern for C# compared to Java
--
--**Option C: Composition via TelemetryHelper (chosen)**
--
--Extract shared telemetry logic into a static helper class. Both connection types call the same helper, each wiring it into their own lifecycle. This:
--- Requires no changes to either inheritance chain
--- Keeps all telemetry logic in one place (single source of truth)
--- Is the standard C# pattern for sharing behavior across unrelated class hierarchies
--- Doesn't affect the `hiveserver2` project at all
--
--**Approach:** Extract shared telemetry logic so both connection types can reuse it.
--
--```mermaid
--classDiagram
--    class TelemetryHelper {
--        +InitializeTelemetry(properties, host, sessionId) TelemetrySessionContext
--        +BuildSystemConfiguration() DriverSystemConfiguration
--        +BuildDriverConnectionParams(properties, host, mode) DriverConnectionParameters
--    }
--    class DatabricksConnection {
--        -TelemetrySession TelemetrySessionContext
--        +InitializeTelemetry()
--    }
--    class StatementExecutionConnection {
--        -TelemetrySession TelemetrySessionContext
--        +InitializeTelemetry()
--    }
--    class DatabricksStatement {
--        +EmitTelemetry()
--    }
--    class StatementExecutionStatement {
--        +EmitTelemetry()
--    }
--    DatabricksConnection --> TelemetryHelper : uses
--    StatementExecutionConnection --> TelemetryHelper : uses
--    DatabricksStatement --> TelemetryHelper : uses
--    StatementExecutionStatement --> TelemetryHelper : uses
--```
--
--**Changes required:**
--
--#### a. Extract `TelemetryHelper` (new static/internal class)
--
--Move `BuildSystemConfiguration()` and `BuildDriverConnectionParams()` out of `DatabricksConnection` into a shared helper so both connection types can call it.
--
--```csharp
--internal static class TelemetryHelper
--{
--    // Shared system config builder (OS, runtime, driver version)
--    public static DriverSystemConfiguration BuildSystemConfiguration(
--        string driverVersion);
--
--    // Shared connection params builder - accepts mode parameter
--    public static DriverConnectionParameters BuildDriverConnectionParams(
--        IReadOnlyDictionary<string, string> properties,
--        string host,
--        DriverMode.Types.Type mode);
--
--    // Shared telemetry initialization
--    public static TelemetrySessionContext InitializeTelemetry(
--        IReadOnlyDictionary<string, string> properties,
--        string host,
--        string sessionId,
--        DriverMode.Types.Type mode,
--        string driverVersion);
--}
--```
--
--#### b. Add telemetry to `StatementExecutionConnection`
--
--**File:** `StatementExecution/StatementExecutionConnection.cs`
--
--- Call `TelemetryHelper.InitializeTelemetry()` after `CreateSessionAsync()` succeeds
--- Set `mode = DriverMode.Types.Type.Sea`
--- Store `TelemetrySessionContext` on the connection
--- Release telemetry client on dispose (matching DatabricksConnection pattern)
--
--#### c. Add telemetry to `StatementExecutionStatement`
--
--**File:** `StatementExecution/StatementExecutionStatement.cs`
--
--The statement-level telemetry methods (`CreateTelemetryContext()`, `RecordSuccess()`, `RecordError()`, `EmitTelemetry()`) follow the same pattern for both Thrift and SEA. Move these into `TelemetryHelper` as well:
--
--```csharp
--internal static class TelemetryHelper
--{
--    // ... connection-level methods from above ...
--
--    // Shared statement telemetry methods
--    public static StatementTelemetryContext? CreateTelemetryContext(
--        TelemetrySessionContext? session,
--        Statement.Types.Type statementType,
--        Operation.Types.Type operationType,
--        bool isCompressed);
--
--    public static void RecordSuccess(
--        StatementTelemetryContext ctx,
--        string? statementId,
--        ExecutionResult.Types.Format resultFormat);
--
--    public static void RecordError(
--        StatementTelemetryContext ctx,
--        Exception ex);
--
--    public static void EmitTelemetry(
--        StatementTelemetryContext ctx,
--        TelemetrySessionContext? session);
--}
--```
--
--Both `DatabricksStatement` and `StatementExecutionStatement` call these shared methods, each providing their own protocol-specific values (e.g., result format, operation type).
--
--#### d. SEA-specific field mapping
--
--| Proto Field | SEA Value |
--|---|---|
--| `driver_connection_params.mode` | `DriverMode.Types.Type.Sea` |
--| `execution_result` | Map from SEA result disposition (INLINE_OR_EXTERNAL_LINKS -> EXTERNAL_LINKS or INLINE_ARROW) |
--| `operation_detail.operation_type` | EXECUTE_STATEMENT_ASYNC (SEA is always async) |
--| `chunk_details` | From `StatementExecutionResultFetcher` chunk metrics |
--
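The disposition-to-format mapping described above could be sketched as follows. This is illustrative only; the method name and the assumption that the fetcher knows whether presigned links were used are not taken from the actual SEA client API:

```csharp
// Sketch: resolve the telemetry result format for a SEA statement.
// With INLINE_OR_EXTERNAL_LINKS disposition the server decides at runtime:
// presigned links mean EXTERNAL_LINKS, otherwise the result is inline Arrow.
private static ExecutionResult.Types.Format MapSeaResultFormat(bool usedExternalLinks)
{
    return usedExternalLinks
        ? ExecutionResult.Types.Format.ExternalLinks
        : ExecutionResult.Types.Format.InlineArrow;
}
```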
--### 1. Populate Missing System Configuration Fields
--
--**File:** `DatabricksConnection.cs` - `BuildSystemConfiguration()`
--
--```csharp
--// Add to BuildSystemConfiguration()
--RuntimeVendor = "Microsoft",  // .NET runtime vendor
--ClientAppName = GetClientAppName(),  // From connection property or user-agent
--```
--
--**Interface:**
--```csharp
--private string GetClientAppName()
--{
--    // Check connection property first, fall back to process name
--    Properties.TryGetValue("adbc.databricks.client_app_name", out string? appName);
--    return appName ?? Process.GetCurrentProcess().ProcessName;
--}
--```
--
--### 2. Populate auth_type on Root Log
--
--**File:** `StatementTelemetryContext.cs` - `BuildTelemetryLog()`
--
--Add `auth_type` string field to TelemetrySessionContext and set it during connection initialization based on the authentication method used.
--
--```csharp
--// In BuildTelemetryLog()
--log.AuthType = _sessionContext.AuthType ?? string.Empty;
--```
--
--**Mapping:**
--| Auth Config | auth_type String |
--|---|---|
--| PAT | `"pat"` |
--| OAuth client_credentials | `"oauth-m2m"` |
--| OAuth browser | `"oauth-u2m"` |
--| Other | `"other"` |
--
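A minimal sketch of this mapping as a helper, following the design's original m2m/u2m naming. The property keys are assumptions for illustration, not the driver's actual constants:

```csharp
// Sketch only: derive the auth_type telemetry string from connection properties.
// Property keys here ("adbc.databricks.oauth.grant_type", "adbc.databricks.token")
// are illustrative placeholders, not the driver's real parameter names.
private static string DetermineAuthType(IReadOnlyDictionary<string, string> properties)
{
    if (properties.TryGetValue("adbc.databricks.oauth.grant_type", out string? grantType)
        && !string.IsNullOrEmpty(grantType))
    {
        return grantType switch
        {
            "client_credentials" => "oauth-m2m", // machine-to-machine flow
            "access_token" => "oauth-u2m",       // browser-based user flow
            _ => "other",
        };
    }

    // A personal access token implies PAT auth.
    return properties.ContainsKey("adbc.databricks.token") ? "pat" : "other";
}
```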
--### 3. Populate WorkspaceId
--
--**File:** `DatabricksConnection.cs` - `InitializeTelemetry()`
--
--Extract the workspace ID from the server response or connection properties. The HTTP path (e.g., `/sql/1.0/warehouses/<id>`) does not contain it directly, but server configuration responses may include it.
--
--```csharp
--// Parse workspace ID from server configuration or properties
--TelemetrySession.WorkspaceId = ExtractWorkspaceId();
--```
--
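A sketch of `ExtractWorkspaceId()` implementing the two strategies above (server configuration first, connection property as fallback). The config key and property name match those used elsewhere in this PR's diff; the method signature is an assumption:

```csharp
// Sketch: extract workspace ID, returning 0 when it cannot be determined.
private static long ExtractWorkspaceId(
    IReadOnlyDictionary<string, string>? serverConfig,
    IReadOnlyDictionary<string, string> properties)
{
    // Strategy 1: cluster server configuration carries the org ID.
    if (serverConfig != null
        && serverConfig.TryGetValue("spark.databricks.clusterUsageTags.orgId", out string? orgId)
        && long.TryParse(orgId, out long fromConfig))
    {
        return fromConfig;
    }

    // Strategy 2: explicit connection property as a fallback.
    if (properties.TryGetValue("adbc.databricks.workspace_id", out string? prop)
        && long.TryParse(prop, out long fromProperty))
    {
        return fromProperty;
    }

    return 0; // 0 signals "unavailable" downstream
}
```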
--### 4. Expand DriverConnectionParameters Population
--
--**File:** `DatabricksConnection.cs` - `BuildDriverConnectionParams()`
--
--Add applicable connection parameters:
--
--```csharp
--return new DriverConnectionParameters
--{
--    HttpPath = httpPath ?? "",
--    Mode = DriverMode.Types.Type.Thrift,
--    HostInfo = new HostDetails { ... },
--    AuthMech = authMech,
--    AuthFlow = authFlow,
--    // NEW fields:
--    EnableArrow = true,  // Always true for ADBC driver
--    RowsFetchedPerBlock = GetBatchSize(),
--    SocketTimeout = GetSocketTimeout(),
--    EnableDirectResults = true,
--    EnableComplexDatatypeSupport = GetComplexTypeSupport(),
--    AutoCommit = GetAutoCommit(),
--};
--```
--
--### 5. Add Metadata Operation Telemetry
--
--Currently only `ExecuteQuery()` and `ExecuteUpdate()` emit telemetry. Metadata operations (GetObjects, GetTableTypes, GetInfo, etc.) are not instrumented.
--
--**Approach:** Override metadata methods in `DatabricksConnection` to emit telemetry with appropriate `OperationType` and `StatementType = METADATA`.
--
--```mermaid
--classDiagram
--    class DatabricksConnection {
--        +GetObjects() QueryResult
--        +GetTableTypes() QueryResult
--        +GetInfo() QueryResult
--    }
--    class StatementTelemetryContext {
--        +OperationType OperationTypeEnum
--        +StatementType METADATA
--    }
--    DatabricksConnection --> StatementTelemetryContext : creates for metadata ops
--```
--
--**Operation type mapping:**
--
--| ADBC Method | Operation.Type |
--|---|---|
--| GetObjects (depth=Catalogs) | LIST_CATALOGS |
--| GetObjects (depth=Schemas) | LIST_SCHEMAS |
--| GetObjects (depth=Tables) | LIST_TABLES |
--| GetObjects (depth=Columns) | LIST_COLUMNS |
--| GetTableTypes | LIST_TABLE_TYPES |
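--
--As a sketch, the mapping can live in one switch over the requested depth. The `GetObjectsDepth` values and the `OperationType` alias here are illustrative, not the final API:
--
--```csharp
--// Hypothetical helper in DatabricksConnection: maps an ADBC GetObjects
--// depth to the telemetry operation type from the table above.
--internal static OperationType MapMetadataOperation(GetObjectsDepth depth) =>
--    depth switch
--    {
--        GetObjectsDepth.Catalogs => OperationType.ListCatalogs,
--        GetObjectsDepth.DbSchemas => OperationType.ListSchemas,
--        GetObjectsDepth.Tables => OperationType.ListTables,
--        GetObjectsDepth.All => OperationType.ListColumns, // depth=Columns
--        _ => OperationType.Unspecified,
--    };
--```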
--
--### 6. Track Internal Calls
--
--**File:** `DatabricksStatement.cs`
--
--Mark internal calls like `USE SCHEMA` (from `SetSchema()` in DatabricksConnection) with `is_internal_call = true`.
--
--**Approach:** Add an internal property to StatementTelemetryContext:
--```csharp
--public bool IsInternalCall { get; set; }
--```
--
--Set it when creating telemetry context for internal operations.
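--
--For example, `SetSchema()` could flag its internal statement before executing it (a sketch; the exact wiring of the flag into the context depends on how the statement creates it):
--
--```csharp
--// In DatabricksConnection.SetSchema() - hypothetical wiring.
--using var statement = (DatabricksStatement)CreateStatement();
--statement.SqlQuery = $"USE SCHEMA {schemaName}";
--statement.IsInternalCall = true; // copied into StatementTelemetryContext
--statement.ExecuteUpdate();
--```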
--
--### 7. Wire ChunkDetails from CloudFetch to Telemetry
--
--`SetChunkDetails()` exists on `StatementTelemetryContext` but is never called. The CloudFetch pipeline already tracks per-chunk timing via `Stopwatch` in `CloudFetchDownloader` but only exports it to Activity traces.
--
--**Approach:** Aggregate chunk metrics in the CloudFetch reader and pass them to the telemetry context before telemetry is emitted.
--
--```mermaid
--sequenceDiagram
--    participant Stmt as DatabricksStatement
--    participant Reader as CloudFetchReader
--    participant DL as CloudFetchDownloader
--    participant Ctx as StatementTelemetryContext
--
--    Stmt->>Reader: Read all batches
--    DL->>DL: Track per-chunk Stopwatch
--    Reader->>Reader: Aggregate chunk stats
--    Stmt->>Reader: GetChunkMetrics()
--    Reader-->>Stmt: ChunkMetrics
--    Stmt->>Ctx: SetChunkDetails(metrics)
--    Stmt->>Ctx: BuildTelemetryLog()
--```
--
--**Changes required:**
--
--#### a. Add `ChunkMetrics` data class
--
--```csharp
--internal sealed class ChunkMetrics
--{
--    public int TotalChunksPresent { get; set; }
--    public int TotalChunksIterated { get; set; }
--    public long InitialChunkLatencyMs { get; set; }
--    public long SlowestChunkLatencyMs { get; set; }
--    public long SumChunksDownloadTimeMs { get; set; }
--}
--```
--
--#### b. Track metrics in `CloudFetchDownloader`
--
--The downloader already has per-file `Stopwatch` timing. Add aggregation fields:
--- Record latency of first completed chunk -> `InitialChunkLatencyMs`
--- Track max latency across all chunks -> `SlowestChunkLatencyMs`
--- Sum all chunk latencies -> `SumChunksDownloadTimeMs`
--
--Expose via `GetChunkMetrics()` method.
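--
--The aggregation itself is a few lines per completed download. A sketch, assuming the downloader calls a hook with each chunk's elapsed milliseconds (field names follow the `ChunkMetrics` class above):
--
--```csharp
--// In CloudFetchDownloader - hypothetical aggregation hook.
--private readonly object _metricsLock = new();
--private readonly ChunkMetrics _metrics = new();
--
--private void RecordChunkDownloaded(long elapsedMs)
--{
--    lock (_metricsLock)
--    {
--        if (_metrics.TotalChunksIterated == 0)
--        {
--            _metrics.InitialChunkLatencyMs = elapsedMs; // first completed chunk
--        }
--        _metrics.TotalChunksIterated++;
--        _metrics.SlowestChunkLatencyMs = Math.Max(_metrics.SlowestChunkLatencyMs, elapsedMs);
--        _metrics.SumChunksDownloadTimeMs += elapsedMs;
--    }
--}
--
--public ChunkMetrics GetChunkMetrics()
--{
--    lock (_metricsLock)
--    {
--        // Return a copy so callers see a consistent snapshot.
--        return new ChunkMetrics
--        {
--            TotalChunksPresent = _metrics.TotalChunksPresent,
--            TotalChunksIterated = _metrics.TotalChunksIterated,
--            InitialChunkLatencyMs = _metrics.InitialChunkLatencyMs,
--            SlowestChunkLatencyMs = _metrics.SlowestChunkLatencyMs,
--            SumChunksDownloadTimeMs = _metrics.SumChunksDownloadTimeMs,
--        };
--    }
--}
--```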
--
--#### c. Bridge in `CloudFetchReader` / `DatabricksCompositeReader`
--
--- `CloudFetchReader` already tracks `_totalBytesDownloaded`; add a method to retrieve aggregated chunk metrics from its downloader
--- Expose `GetChunkMetrics()` on the reader interface
--
--#### d. Call `SetChunkDetails()` in `DatabricksStatement.EmitTelemetry()`
--
--Before building the telemetry log, check if the result reader is a CloudFetch reader and pull chunk metrics:
--
--```csharp
--// In EmitTelemetry() or RecordSuccess()
--if (reader is CloudFetchReader cfReader)
--{
--    var metrics = cfReader.GetChunkMetrics();
--    ctx.SetChunkDetails(
--        metrics.TotalChunksPresent,
--        metrics.TotalChunksIterated,
--        metrics.InitialChunkLatencyMs,
--        metrics.SlowestChunkLatencyMs,
--        metrics.SumChunksDownloadTimeMs);
--}
--```
--
--**Applies to both Thrift and SEA** since both use `CloudFetchDownloader` under the hood.
--
--### 8. Track Retry Count
--
--**File:** `StatementTelemetryContext.cs`
--
--Add retry count tracking. The retry count is available from the HTTP retry handler.
--
--```csharp
--public int RetryCount { get; set; }
--
--// In BuildTelemetryLog():
--sqlEvent.RetryCount = RetryCount;
--```
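--
--One way to surface the count is an observer callback from the HTTP retry handler into the active statement context (illustrative; the real handler and interface names may differ):
--
--```csharp
--// Hypothetical observer wired between the HTTP retry handler and the context.
--internal interface IRetryObserver
--{
--    void OnRetry();
--}
--
--// StatementTelemetryContext side, if retries can race with reads:
--private int _retryCount;
--public int RetryCount => _retryCount;
--public void OnRetry() => Interlocked.Increment(ref _retryCount);
--```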
--
-----
--
--## E2E Test Strategy
--
--### Test Infrastructure
--
--Use `CapturingTelemetryExporter` to intercept telemetry events and validate proto field values without requiring backend connectivity.
--
--```mermaid
--sequenceDiagram
--    participant Test as E2E Test
--    participant Conn as DatabricksConnection
--    participant Stmt as DatabricksStatement
--    participant Capture as CapturingTelemetryExporter
--
--    Test->>Conn: Connect with CapturingExporter
--    Test->>Stmt: ExecuteQuery("SELECT 1")
--    Stmt->>Capture: Enqueue(telemetryLog)
--    Test->>Capture: Assert all proto fields
--```
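--
--A minimal capturing exporter could look like this (a sketch; the exporter interface and `TelemetryFrontendLog` type are assumed from the existing telemetry pipeline):
--
--```csharp
--internal sealed class CapturingTelemetryExporter : ITelemetryExporter
--{
--    private readonly ConcurrentQueue<TelemetryFrontendLog> _logs = new();
--
--    // Snapshot of everything captured so far, for test assertions.
--    public IReadOnlyList<TelemetryFrontendLog> CapturedLogs => _logs.ToArray();
--
--    public Task ExportAsync(IEnumerable<TelemetryFrontendLog> logs, CancellationToken cancellationToken)
--    {
--        foreach (var log in logs)
--        {
--            _logs.Enqueue(log); // capture instead of sending to the backend
--        }
--        return Task.CompletedTask;
--    }
--}
--```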
--
--### Test Cases
--
--#### System Configuration Tests
--- `Telemetry_SystemConfig_AllFieldsPopulated` - Verify all 12 DriverSystemConfiguration fields are non-empty
--- `Telemetry_SystemConfig_RuntimeVendor_IsMicrosoft` - Verify runtime_vendor is set
--- `Telemetry_SystemConfig_ClientAppName_IsPopulated` - Verify client_app_name from property or default
--
--#### Connection Parameters Tests
--- `Telemetry_ConnectionParams_BasicFields` - Verify http_path, mode, host_info, auth_mech, auth_flow
--- `Telemetry_ConnectionParams_ExtendedFields` - Verify enable_arrow, rows_fetched_per_block, socket_timeout
--- `Telemetry_ConnectionParams_Mode_IsThrift` - Verify mode=THRIFT for Thrift connections
--
--#### Root Log Tests
--- `Telemetry_RootLog_AuthType_IsPopulated` - Verify auth_type string matches auth config
--- `Telemetry_RootLog_WorkspaceId_IsSet` - Verify workspace_id is non-zero
--- `Telemetry_RootLog_SessionId_MatchesConnection` - Verify session_id matches
--
--#### SQL Execution Tests
--- `Telemetry_Query_AllSqlEventFields` - Full field validation for SELECT query
--- `Telemetry_Update_StatementType_IsUpdate` - Verify UPDATE statement type
--- `Telemetry_Query_OperationLatency_IsPositive` - Verify timing is captured
--- `Telemetry_Query_ResultLatency_FirstBatchAndConsumption` - Verify both latency fields
--
--#### Operation Detail Tests
--- `Telemetry_OperationDetail_PollCount_IsTracked` - Verify n_operation_status_calls
--- `Telemetry_OperationDetail_OperationType_IsExecuteStatement` - Verify operation type
--- `Telemetry_InternalCall_IsMarkedAsInternal` - Verify is_internal_call for USE SCHEMA
--
--#### CloudFetch Chunk Details Tests
--- `Telemetry_CloudFetch_ChunkDetails_AllFieldsPopulated` - Verify all 5 ChunkDetails fields are non-zero
--- `Telemetry_CloudFetch_InitialChunkLatency_IsPositive` - Verify initial_chunk_latency_millis > 0
--- `Telemetry_CloudFetch_SlowestChunkLatency_GteInitial` - Verify slowest >= initial
--- `Telemetry_CloudFetch_SumDownloadTime_GteSlowest` - Verify sum >= slowest
--- `Telemetry_CloudFetch_TotalChunksIterated_LtePresent` - Verify iterated <= present
--- `Telemetry_CloudFetch_ExecutionResult_IsExternalLinks` - Verify result format
--- `Telemetry_InlineResults_NoChunkDetails` - Verify chunk_details is null for inline results
--
--#### Error Handling Tests
--- `Telemetry_Error_CapturesErrorName` - Verify error_name from exception type
--- `Telemetry_Error_NoStackTrace` - Verify stack_trace is empty (privacy)
--
--#### Metadata Operation Tests
--- `Telemetry_GetObjects_EmitsTelemetry` - Verify telemetry for GetObjects
--- `Telemetry_GetTableTypes_EmitsTelemetry` - Verify telemetry for GetTableTypes
--- `Telemetry_Metadata_OperationType_IsCorrect` - Verify LIST_CATALOGS, LIST_TABLES, etc.
--- `Telemetry_Metadata_StatementType_IsMetadata` - Verify statement_type=METADATA
--
--#### SEA (Statement Execution) Connection Tests
--- `Telemetry_SEA_EmitsTelemetryOnQuery` - Verify SEA connections emit telemetry at all
--- `Telemetry_SEA_Mode_IsSea` - Verify mode=SEA in connection params
--- `Telemetry_SEA_SessionId_IsPopulated` - Verify session_id from REST session
--- `Telemetry_SEA_OperationType_IsExecuteStatementAsync` - SEA is always async
--- `Telemetry_SEA_CloudFetch_ChunkDetails` - Verify chunk metrics from SEA fetcher
--- `Telemetry_SEA_ExecutionResult_MatchesDisposition` - Verify result format mapping
--- `Telemetry_SEA_SystemConfig_MatchesThrift` - Same OS/runtime info regardless of protocol
--- `Telemetry_SEA_ConnectionDispose_FlushesAll` - Verify cleanup on SEA connection close
--- `Telemetry_SEA_Error_CapturesErrorName` - Error handling works for SEA
--
--#### Connection Lifecycle Tests
--- `Telemetry_MultipleStatements_EachEmitsSeparateLog` - Verify per-statement telemetry
--- `Telemetry_ConnectionDispose_FlushesAllPending` - Verify flush on close
--
-----
--
--## Fields Intentionally Not Populated
--
--The following proto fields are **not applicable** to the C# ADBC driver and will be left unset:
--
--| Field | Reason |
--|---|---|
--| `java_uses_patched_arrow` | Java-specific |
--| `vol_operation` (all fields) | UC Volume operations not supported in ADBC |
--| `google_service_account` | GCP-specific, not applicable |
--| `google_credential_file_path` | GCP-specific, not applicable |
--| `ssl_trust_store_type` | Java keystore concept |
--| `jwt_key_file`, `jwt_algorithm` | Not supported in C# driver |
--| `discovery_mode_enabled`, `discovery_url` | Not implemented |
--| `azure_workspace_resource_id`, `azure_tenant_id` | Azure-specific, may add later |
--| `enable_sea_hybrid_results` | Not configurable in C# driver |
--| `non_proxy_hosts`, proxy fields | Proxy not implemented |
--| `chunk_id` | Per-chunk failure events, not per-statement |
--
-----
--
--## Implementation Priority
--
--### Phase 1: Thrift Telemetry Gaps (Missing Fields, ChunkDetails, Behavioral Changes)
--
--Fix all gaps in the existing Thrift telemetry pipeline first, since the infrastructure is already in place.
--
--**E2E Tests (test-first):**
--1. Build E2E test infrastructure using `CapturingTelemetryExporter` to assert proto field values
--2. Write E2E tests for all currently populated proto fields (Thrift) - establish the baseline
--3. Write failing E2E tests for missing fields (auth_type, WorkspaceId, runtime_vendor, client_app_name, etc.)
--4. Write failing E2E tests for ChunkDetails fields
--5. Write failing E2E tests for metadata operations and internal call tracking
--
--**Implementation:**
--6. Populate `runtime_vendor` and `client_app_name` in DriverSystemConfiguration
--7. Populate `auth_type` on root log
--8. Populate additional DriverConnectionParameters (enable_arrow, rows_fetched_per_block, etc.)
--9. Set `WorkspaceId` in TelemetrySessionContext
--10. Add `ChunkMetrics` aggregation to `CloudFetchDownloader`
--11. Expose metrics via `CloudFetchReader.GetChunkMetrics()`
--12. Call `SetChunkDetails()` in `DatabricksStatement.EmitTelemetry()`
--13. Track `retry_count` on SqlExecutionEvent
--14. Mark internal calls with `is_internal_call = true`
--15. Add metadata operation telemetry (GetObjects, GetTableTypes)
--16. Verify all Phase 1 E2E tests pass
--
--### Phase 2: SEA Telemetry (Wire Telemetry into StatementExecutionConnection)
--
--Once Thrift telemetry is complete, extend coverage to the SEA protocol using the shared `TelemetryHelper`.
--
--**E2E Tests (test-first):**
--17. Write failing E2E tests for SEA telemetry (expect telemetry events from SEA connections)
--
--**Implementation:**
--18. Extract `TelemetryHelper` from `DatabricksConnection` for shared use (already done - verify coverage)
--19. Wire `InitializeTelemetry()` into `StatementExecutionConnection` with `mode=SEA`
--20. Add `EmitTelemetry()` to `StatementExecutionStatement`
--21. Wire telemetry dispose/flush into `StatementExecutionConnection.Dispose()`
--22. Wire `SetChunkDetails()` in `StatementExecutionStatement.EmitTelemetry()` for SEA CloudFetch
--23. Verify all Phase 2 SEA E2E tests pass
--
-----
--
--## Configuration
--
--No new configuration parameters are needed. All changes use existing connection properties and runtime information.
--
-----
--
--## Error Handling
--
--All telemetry changes follow the existing design principle: **telemetry must never impact driver operations**. All new code paths are wrapped in try-catch blocks that silently swallow exceptions.
--
-----
--
--## Concurrency
--
--No new concurrency concerns. All changes follow existing patterns:
--- `TelemetrySessionContext` is created once per connection (single-threaded)
--- `StatementTelemetryContext` is created once per statement execution (single-threaded within statement)
--- `TelemetryClient.Enqueue()` is already thread-safe
\ No newline at end of file

Reproduce locally: git range-diff 425a554..6a928e7 425a554..490d943 | Disable: git config gitstack.push-range-diff false

private bool enableComplexDatatypeSupport;
private Dictionary<string, string>? confOverlay;
internal string? StatementId { get; set; }
private QueryResult? _lastQueryResult; // Track last query result for telemetry chunk metrics
Collaborator

This seems wrong, for metadata calls like GetTablesAsync/GetColumnsAsync they route to HiveServer2Connection, which directly call Thrift metadata: https://github.com/adbc-drivers/hiveserver2/blob/825f9b96b810c9fbbcfa452392b1f4f903547568/csharp/src/Hive2/HiveServer2Connection.cs#L1049, so it does not go through the executeQuery path in this file

}

// Strategy 2: Check connection property as fallback
if (workspaceId == 0 && Properties.TryGetValue("adbc.databricks.workspace_id", out string? workspaceIdProp))
Collaborator

Why does this lookup still exist? `adbc.databricks.workspace_id`

}

// Strategy 2: Check connection property as fallback
if (workspaceId == 0 && Properties.TryGetValue("adbc.databricks.workspace_id", out string? workspaceIdProp))
Collaborator

Extracting the org ID from the HTTP path would only work for SPOG URLs, where the org ID is present, not for current legacy URLs.
I think the best option is still to extract the orgId from the OpenSession response HTTP header.

private string GetClientAppName(string processName)
{
// Check connection property first, fall back to process name
Properties.TryGetValue("adbc.databricks.client_app_name", out string? appName);
Collaborator

I am not seeing `adbc.databricks.client_app_name` as a valid Databricks parameter?

Collaborator Author

this method is not needed anymore, it should be removed.

Collaborator Author

✅ Already removed — GetClientAppName method and adbc.databricks.client_app_name property lookup were removed in the previous round. ClientAppName is now set directly to processName in BuildSystemConfiguration().



}

// Strategy 2: Check connection property as fallback
if (workspaceId == 0 && Properties.TryGetValue("adbc.databricks.workspace_id", out string? workspaceIdProp))
Collaborator Author

In telemetry reporting there is no need for the workspace ID; we should remove the logic.

private bool enableComplexDatatypeSupport;
private Dictionary<string, string>? confOverlay;
internal string? StatementId { get; set; }
private QueryResult? _lastQueryResult; // Track last query result for telemetry chunk metrics
Collaborator Author

please verify this.

…ID from response header

- Use DatabricksStatement.DatabricksBatchSizeDefault directly (changed to internal)
- Use ConnectTimeoutMilliseconds from base class instead of duplicate local const
- Extract workspace ID from x-databricks-org-id response header via new
  OrgIdCaptureHandler, replacing HTTP path parsing (works for SPOG + legacy URLs)
- Remove test using unsupported adbc.databricks.workspace_id property

Co-authored-by: Isaac
@jadewang-db
Collaborator Author

Range-diff: stack/fix-telemetry-gaps-design (490d943 -> 19ae1c0)
csharp/src/DatabricksConnection.cs
@@ -1,6 +1,12 @@
 diff --git a/csharp/src/DatabricksConnection.cs b/csharp/src/DatabricksConnection.cs
 --- a/csharp/src/DatabricksConnection.cs
 +++ b/csharp/src/DatabricksConnection.cs
+         // Shared OAuth token provider for connection-wide token caching
+         private OAuthClientCredentialsProvider? _oauthTokenProvider;
+ 
++        // Captures x-databricks-org-id from HTTP response headers
++        private Http.OrgIdCaptureHandler? _orgIdCaptureHandler;
++
          // Telemetry fields
          private ITelemetryClient? _telemetryClient;
          private string? _host;
@@ -8,6 +14,18 @@
          internal TelemetrySessionContext? TelemetrySession { get; private set; }
  
          /// <summary>
+                 AddThriftErrorHandler = true
+             };
+ 
++            // Add org ID capture handler between base and the rest of the chain.
++            // This captures x-databricks-org-id from the first successful HTTP response
++            // (e.g., OpenSession), which works for both SPOG and legacy URLs.
++            _orgIdCaptureHandler = new Http.OrgIdCaptureHandler(config.BaseHandler);
++            config.BaseHandler = _orgIdCaptureHandler;
++
+             var result = HttpHandlerFactory.CreateHandlersWithTokenProvider(config);
+             _oauthTokenProvider = result.TokenProvider;
+             return result.Handler;
  
          protected override string DriverName => DatabricksDriverName;
  
@@ -158,13 +176,14 @@
                      true, // unauthed failure will be report separately
                      telemetryConfig);
  
-+                // Extract workspace ID from org ID in the HTTP path (e.g., ?o=12345)
++                // Extract workspace ID from x-databricks-org-id response header
++                // This works for both SPOG and legacy URLs, unlike parsing from the HTTP path.
 +                long workspaceId = 0;
-+                string? orgId = PropertyHelper.ParseOrgIdFromProperties(Properties);
++                string? orgId = _orgIdCaptureHandler?.CapturedOrgId;
 +                if (!string.IsNullOrEmpty(orgId) && long.TryParse(orgId, out long parsedOrgId))
 +                {
 +                    workspaceId = parsedOrgId;
-+                    activity?.AddEvent(new ActivityEvent("telemetry.workspace_id.from_org_id",
++                    activity?.AddEvent(new ActivityEvent("telemetry.workspace_id.from_response_header",
 +                        tags: new ActivityTagsCollection { { "workspace_id", workspaceId } }));
 +                }
 +
@@ -221,13 +240,12 @@
 +        /// <returns>The batch size value.</returns>
 +        private int GetBatchSize()
 +        {
-+            const int DefaultBatchSize = 2000000; // DatabricksStatement.DatabricksBatchSizeDefault
 +            if (Properties.TryGetValue(ApacheParameters.BatchSize, out string? batchSizeStr) &&
 +                int.TryParse(batchSizeStr, out int batchSize))
 +            {
 +                return batchSize;
 +            }
-+            return DefaultBatchSize;
++            return (int)DatabricksStatement.DatabricksBatchSizeDefault;
 +        }
 +
 +        /// <summary>
@@ -236,13 +254,7 @@
 +        /// <returns>The socket timeout value in milliseconds.</returns>
 +        private int GetSocketTimeout()
 +        {
-+            const int DefaultConnectTimeoutMs = 30000; // Default from HiveServer2
-+            if (Properties.TryGetValue(SparkParameters.ConnectTimeoutMilliseconds, out string? timeoutStr) &&
-+                int.TryParse(timeoutStr, out int timeout))
-+            {
-+                return timeout;
-+            }
-+            return DefaultConnectTimeoutMs;
++            return ConnectTimeoutMilliseconds;
 +        }
 +
 +        /// <summary>
csharp/src/DatabricksStatement.cs
@@ -9,6 +9,14 @@
  using AdbcDrivers.Databricks.Result;
  using AdbcDrivers.Databricks.Telemetry;
  using AdbcDrivers.Databricks.Telemetry.Models;
+         // Databricks CloudFetch supports much larger batch sizes than standard Arrow batches (1024MB vs 10MB limit).
+         // Using 2M rows significantly reduces round trips for medium/large result sets compared to the base 50K default,
+         // improving query performance by reducing the number of FetchResults calls needed.
+-        private const long DatabricksBatchSizeDefault = 2000000;
++        internal const long DatabricksBatchSizeDefault = 2000000;
+         private const string QueryTagsKey = "query_tags";
+         private bool useCloudFetch;
+         private bool canDecompressLz4;
          private bool enableComplexDatatypeSupport;
          private Dictionary<string, string>? confOverlay;
          internal string? StatementId { get; set; }
csharp/src/Http/OrgIdCaptureHandler.cs
@@ -0,0 +1,69 @@
+diff --git a/csharp/src/Http/OrgIdCaptureHandler.cs b/csharp/src/Http/OrgIdCaptureHandler.cs
+new file mode 100644
+--- /dev/null
++++ b/csharp/src/Http/OrgIdCaptureHandler.cs
++/*
++* Copyright (c) 2025 ADBC Drivers Contributors
++*
++* This file has been modified from its original version, which is
++* under the Apache License:
++*
++* Licensed to the Apache Software Foundation (ASF) under one
++* or more contributor license agreements.  See the NOTICE file
++* distributed with this work for additional information
++* regarding copyright ownership.  The ASF licenses this file
++* to you under the Apache License, Version 2.0 (the
++* "License"); you may not use this file except in compliance
++* with the License.  You may obtain a copy of the License at
++*
++*    http://www.apache.org/licenses/LICENSE-2.0
++*
++* Unless required by applicable law or agreed to in writing, software
++* distributed under the License is distributed on an "AS IS" BASIS,
++* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
++* See the License for the specific language governing permissions and
++* limitations under the License.
++*/
++
++using System.Linq;
++using System.Net.Http;
++using System.Threading;
++using System.Threading.Tasks;
++
++namespace AdbcDrivers.Databricks.Http
++{
++    /// <summary>
++    /// HTTP handler that captures the x-databricks-org-id header from the first successful response.
++    /// This org ID is used for telemetry workspace identification.
++    /// </summary>
++    internal class OrgIdCaptureHandler : DelegatingHandler
++    {
++        private string? _capturedOrgId;
++
++        /// <summary>
++        /// Gets the captured org ID from the response header, or null if not yet captured.
++        /// </summary>
++        public string? CapturedOrgId => _capturedOrgId;
++
++        public OrgIdCaptureHandler(HttpMessageHandler innerHandler)
++            : base(innerHandler)
++        {
++        }
++
++        protected override async Task<HttpResponseMessage> SendAsync(
++            HttpRequestMessage request,
++            CancellationToken cancellationToken)
++        {
++            HttpResponseMessage response = await base.SendAsync(request, cancellationToken);
++
++            if (_capturedOrgId == null &&
++                response.IsSuccessStatusCode &&
++                response.Headers.TryGetValues(DatabricksConstants.OrgIdHeader, out var headerValues))
++            {
++                _capturedOrgId = headerValues.FirstOrDefault();
++            }
++
++            return response;
++        }
++    }
++}
\ No newline at end of file
csharp/test/E2E/Telemetry/WorkspaceIdTests.cs
@@ -178,60 +178,7 @@
 +            }
 +        }
 +
-+        /// <summary>
-+        /// Tests that workspace_id can be explicitly set via connection property.
-+        /// This allows users to provide workspace ID when it's not available from server configuration.
-+        /// </summary>
-+        [SkippableFact]
-+        public async Task WorkspaceId_CanBeSet_ViaConnectionProperty()
-+        {
-+            CapturingTelemetryExporter exporter = null!;
-+            AdbcConnection? connection = null;
-+
-+            try
-+            {
-+                var properties = TestEnvironment.GetDriverParameters(TestConfiguration);
-+
-+                // Set explicit workspace ID via connection property
-+                long expectedWorkspaceId = 1234567890123456;
-+                properties["adbc.databricks.workspace_id"] = expectedWorkspaceId.ToString();
-+
-+                (connection, exporter) = TelemetryTestHelpers.CreateConnectionWithCapturingTelemetry(properties);
-+
-+                // Execute a simple query to trigger telemetry
-+                using var statement = connection.CreateStatement();
-+                statement.SqlQuery = "SELECT 1 AS test_value";
-+                var result = statement.ExecuteQuery();
-+                using var reader = result.Stream;
-+
-+                statement.Dispose();
-+
-+                // Wait for telemetry to be captured
-+                var logs = await TelemetryTestHelpers.WaitForTelemetryEvents(exporter, expectedCount: 1);
-+                TelemetryTestHelpers.AssertLogCount(logs, 1);
-+
-+                var frontendLog = logs[0];
-+
-+                // Assert workspace_id matches the explicit value from connection property
-+                // Note: If server config provides orgId, it takes precedence over connection property
-+                Assert.True(frontendLog.WorkspaceId == expectedWorkspaceId || frontendLog.WorkspaceId > 0,
-+                    $"workspace_id should either match explicit value ({expectedWorkspaceId}) or be from server config, but was {frontendLog.WorkspaceId}");
-+
-+                OutputHelper?.WriteLine($"✓ workspace_id: {frontendLog.WorkspaceId}");
-+                if (frontendLog.WorkspaceId == expectedWorkspaceId)
-+                {
-+                    OutputHelper?.WriteLine("  ✓ Matches explicit value from connection property");
-+                }
-+                else
-+                {
-+                    OutputHelper?.WriteLine("  ✓ Server configuration orgId took precedence over connection property");
-+                }
-+            }
-+            finally
-+            {
-+                connection?.Dispose();
-+                TelemetryTestHelpers.ClearExporterOverride();
-+            }
-+        }
++        // Note: adbc.databricks.workspace_id is not a supported connection property.
++        // Workspace ID is extracted from x-databricks-org-id response header.
 +    }
 +}
\ No newline at end of file

Reproduce locally: git range-diff 425a554..490d943 425a554..19ae1c0 | Disable: git config gitstack.push-range-diff false

…data commands

Statement-level metadata commands (getcatalogs, gettables, getcolumns, etc.)
executed via DatabricksStatement.ExecuteQuery were incorrectly tagged as
StatementType.Query/OperationType.ExecuteStatement. This fix correctly emits
StatementType.Metadata with the appropriate OperationType (ListCatalogs,
ListTables, ListColumns, etc.), aligning with the connection-level GetObjects
telemetry. The two paths remain distinguishable via sql_statement_id (populated
for statement path, empty for GetObjects path).

Co-authored-by: Isaac
@jadewang-db
Collaborator Author

Range-diff: stack/fix-telemetry-gaps-design (19ae1c0 -> 8da5457)
csharp/src/DatabricksStatement.cs
@@ -30,9 +30,53 @@
              ctx.StatementType = statementType;
              ctx.IsCompressed = canDecompressLz4;
 +            ctx.IsInternalCall = IsInternalCall;
++            return ctx;
++        }
++
++        /// <summary>
++        /// Maps a metadata SQL command to the corresponding telemetry operation type.
++        /// Returns null if the command is not a recognized metadata command.
++        /// </summary>
++        internal static OperationType? GetMetadataOperationType(string? sqlQuery)
++        {
++            return sqlQuery?.ToLowerInvariant() switch
++            {
++                "getcatalogs" => OperationType.ListCatalogs,
++                "getschemas" => OperationType.ListSchemas,
++                "gettables" => OperationType.ListTables,
++                "getcolumns" or "getcolumnsextended" => OperationType.ListColumns,
++                "gettabletypes" => OperationType.ListTableTypes,
++                "getprimarykeys" => OperationType.ListPrimaryKeys,
++                "getcrossreference" => OperationType.ListCrossReferences,
++                _ => null
++            };
++        }
++
++        private StatementTelemetryContext? CreateMetadataTelemetryContext()
++        {
++            var session = ((DatabricksConnection)Connection).TelemetrySession;
++            if (session?.TelemetryClient == null) return null;
++
++            var operationType = GetMetadataOperationType(SqlQuery) ?? OperationType.Unspecified;
++
++            var ctx = new StatementTelemetryContext(session);
++            ctx.OperationType = operationType;
++            ctx.StatementType = Telemetry.Proto.Statement.Types.Type.Metadata;
++            ctx.ResultFormat = ExecutionResultFormat.InlineArrow;
++            ctx.IsCompressed = false;
++            ctx.IsInternalCall = IsInternalCall;
              return ctx;
          }
  
+ 
+         public override QueryResult ExecuteQuery()
+         {
+-            var ctx = CreateTelemetryContext(Telemetry.Proto.Statement.Types.Type.Query);
++            var ctx = IsMetadataCommand
++                ? CreateMetadataTelemetryContext()
++                : CreateTelemetryContext(Telemetry.Proto.Statement.Types.Type.Query);
+             if (ctx == null) return base.ExecuteQuery();
+ 
              try
              {
                  QueryResult result = base.ExecuteQuery();
@@ -54,6 +98,13 @@
          }
  
          public override async ValueTask<QueryResult> ExecuteQueryAsync()
+         {
+-            var ctx = CreateTelemetryContext(Telemetry.Proto.Statement.Types.Type.Query);
++            var ctx = IsMetadataCommand
++                ? CreateMetadataTelemetryContext()
++                : CreateTelemetryContext(Telemetry.Proto.Statement.Types.Type.Query);
+             if (ctx == null) return await base.ExecuteQueryAsync();
+ 
              try
              {
                  QueryResult result = await base.ExecuteQueryAsync();
csharp/test/E2E/Telemetry/StatementMetadataTelemetryTests.cs
@@ -0,0 +1,251 @@
+diff --git a/csharp/test/E2E/Telemetry/StatementMetadataTelemetryTests.cs b/csharp/test/E2E/Telemetry/StatementMetadataTelemetryTests.cs
+new file mode 100644
+--- /dev/null
++++ b/csharp/test/E2E/Telemetry/StatementMetadataTelemetryTests.cs
++/*
++* Copyright (c) 2025 ADBC Drivers Contributors
++*
++* Licensed under the Apache License, Version 2.0 (the "License");
++* you may not use this file except in compliance with the License.
++* You may obtain a copy of the License at
++*
++*     http://www.apache.org/licenses/LICENSE-2.0
++*
++* Unless required by applicable law or agreed to in writing, software
++* distributed under the License is distributed on an "AS IS" BASIS,
++* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
++* See the License for the specific language governing permissions and
++* limitations under the License.
++*/
++
++using System;
++using System.Collections.Generic;
++using System.Linq;
++using System.Threading.Tasks;
++using AdbcDrivers.Databricks.Telemetry;
++using AdbcDrivers.HiveServer2;
++using Apache.Arrow.Adbc;
++using Apache.Arrow.Adbc.Tests;
++using Xunit;
++using Xunit.Abstractions;
++using OperationType = AdbcDrivers.Databricks.Telemetry.Proto.Operation.Types.Type;
++using StatementType = AdbcDrivers.Databricks.Telemetry.Proto.Statement.Types.Type;
++
++namespace AdbcDrivers.Databricks.Tests.E2E.Telemetry
++{
++    /// <summary>
++    /// E2E tests for statement-level metadata command telemetry.
++    /// Validates that metadata commands executed via DatabricksStatement.ExecuteQuery
++    /// (e.g., SqlQuery = "getcatalogs") emit telemetry with correct StatementType.Metadata
++    /// and the appropriate OperationType, rather than StatementType.Query/OperationType.ExecuteStatement.
++    /// </summary>
++    public class StatementMetadataTelemetryTests : TestBase<DatabricksTestConfiguration, DatabricksTestEnvironment>
++    {
++        // Filters to scope metadata queries and avoid MaxMessageSize errors
++        private const string TestCatalog = "main";
++        private const string TestSchema = "adbc_testing";
++        private const string TestTable = "all_column_types";
++
++        public StatementMetadataTelemetryTests(ITestOutputHelper? outputHelper)
++            : base(outputHelper, new DatabricksTestEnvironment.Factory())
++        {
++            Skip.IfNot(Utils.CanExecuteTestConfig(TestConfigVariable));
++        }
++
++        [SkippableFact]
++        public async Task Telemetry_StatementGetCatalogs_EmitsMetadataWithListCatalogs()
++        {
++            await AssertStatementMetadataTelemetry(
++                command: "getcatalogs",
++                expectedOperationType: OperationType.ListCatalogs);
++        }
++
++        [SkippableFact]
++        public async Task Telemetry_StatementGetSchemas_EmitsMetadataWithListSchemas()
++        {
++            await AssertStatementMetadataTelemetry(
++                command: "getschemas",
++                expectedOperationType: OperationType.ListSchemas,
++                options: new Dictionary<string, string>
++                {
++                    [ApacheParameters.CatalogName] = TestCatalog,
++                });
++        }
++
++        [SkippableFact]
++        public async Task Telemetry_StatementGetTables_EmitsMetadataWithListTables()
++        {
++            await AssertStatementMetadataTelemetry(
++                command: "gettables",
++                expectedOperationType: OperationType.ListTables,
++                options: new Dictionary<string, string>
++                {
++                    [ApacheParameters.CatalogName] = TestCatalog,
++                    [ApacheParameters.SchemaName] = TestSchema,
++                });
++        }
++
++        [SkippableFact]
++        public async Task Telemetry_StatementGetColumns_EmitsMetadataWithListColumns()
++        {
++            await AssertStatementMetadataTelemetry(
++                command: "getcolumns",
++                expectedOperationType: OperationType.ListColumns,
++                options: new Dictionary<string, string>
++                {
++                    [ApacheParameters.CatalogName] = TestCatalog,
++                    [ApacheParameters.SchemaName] = TestSchema,
++                    [ApacheParameters.TableName] = TestTable,
++                });
++        }
++
++        [SkippableFact]
++        public async Task Telemetry_StatementMetadata_AllCommands_EmitCorrectOperationType()
++        {
++            CapturingTelemetryExporter exporter = null!;
++            AdbcConnection? connection = null;
++
++            try
++            {
++                var properties = TestEnvironment.GetDriverParameters(TestConfiguration);
++                (connection, exporter) = TelemetryTestHelpers.CreateConnectionWithCapturingTelemetry(properties);
++
++                var commandMappings = new (string Command, OperationType ExpectedOp, Dictionary<string, string>? Options)[]
++                {
++                    ("getcatalogs", OperationType.ListCatalogs, null),
++                    ("getschemas", OperationType.ListSchemas, new Dictionary<string, string>
++                    {
++                        [ApacheParameters.CatalogName] = TestCatalog,
++                    }),
++                    ("gettables", OperationType.ListTables, new Dictionary<string, string>
++                    {
++                        [ApacheParameters.CatalogName] = TestCatalog,
++                        [ApacheParameters.SchemaName] = TestSchema,
++                    }),
++                    ("getcolumns", OperationType.ListColumns, new Dictionary<string, string>
++                    {
++                        [ApacheParameters.CatalogName] = TestCatalog,
++                        [ApacheParameters.SchemaName] = TestSchema,
++                        [ApacheParameters.TableName] = TestTable,
++                    }),
++                };
++
++                foreach (var mapping in commandMappings)
++                {
++                    exporter.Reset();
++
++                    // Explicit using block so statement is disposed (and telemetry emitted) before we check
++                    using (var statement = connection.CreateStatement())
++                    {
++                        statement.SetOption(ApacheParameters.IsMetadataCommand, "true");
++                        statement.SqlQuery = mapping.Command;
++
++                        if (mapping.Options != null)
++                        {
++                            foreach (var opt in mapping.Options)
++                            {
++                                statement.SetOption(opt.Key, opt.Value);
++                            }
++                        }
++
++                        var result = statement.ExecuteQuery();
++                        result.Stream?.Dispose();
++                    }
++
++                    // Flush telemetry after statement disposal
++                    if (connection is DatabricksConnection dbConn && dbConn.TelemetrySession?.TelemetryClient != null)
++                    {
++                        await dbConn.TelemetrySession.TelemetryClient.FlushAsync(default);
++                    }
++
++                    var logs = await TelemetryTestHelpers.WaitForTelemetryEvents(exporter, expectedCount: 1, timeoutMs: 5000);
++                    Assert.NotEmpty(logs);
++
++                    var log = TelemetryTestHelpers.FindLog(logs, proto =>
++                        proto.SqlOperation?.OperationDetail?.OperationType == mapping.ExpectedOp);
++
++                    Assert.NotNull(log);
++
++                    var protoLog = TelemetryTestHelpers.GetProtoLog(log);
++                    Assert.Equal(StatementType.Metadata, protoLog.SqlOperation.StatementType);
++                    Assert.Equal(mapping.ExpectedOp, protoLog.SqlOperation.OperationDetail.OperationType);
++                }
++            }
++            finally
++            {
++                connection?.Dispose();
++                TelemetryTestHelpers.ClearExporterOverride();
++            }
++        }
++
++        /// <summary>
++        /// Verifies that a single statement-level metadata command emits telemetry with the expected statement and operation types.
++        /// </summary>
++        private async Task AssertStatementMetadataTelemetry(
++            string command,
++            OperationType expectedOperationType,
++            Dictionary<string, string>? options = null)
++        {
++            CapturingTelemetryExporter exporter = null!;
++            AdbcConnection? connection = null;
++
++            try
++            {
++                var properties = TestEnvironment.GetDriverParameters(TestConfiguration);
++                (connection, exporter) = TelemetryTestHelpers.CreateConnectionWithCapturingTelemetry(properties);
++
++                // Execute metadata command via statement path
++                // Explicit using block so statement is disposed (and telemetry emitted) before we check
++                using (var statement = connection.CreateStatement())
++                {
++                    statement.SetOption(ApacheParameters.IsMetadataCommand, "true");
++                    statement.SqlQuery = command;
++
++                    if (options != null)
++                    {
++                        foreach (var opt in options)
++                        {
++                            statement.SetOption(opt.Key, opt.Value);
++                        }
++                    }
++
++                    var result = statement.ExecuteQuery();
++                    result.Stream?.Dispose();
++                }
++
++                // Flush telemetry after statement disposal
++                if (connection is DatabricksConnection dbConn && dbConn.TelemetrySession?.TelemetryClient != null)
++                {
++                    await dbConn.TelemetrySession.TelemetryClient.FlushAsync(default);
++                }
++
++                // Wait for telemetry events
++                var logs = await TelemetryTestHelpers.WaitForTelemetryEvents(exporter, expectedCount: 1, timeoutMs: 5000);
++
++                Assert.NotEmpty(logs);
++
++                // Find the metadata telemetry log with correct operation type
++                var log = TelemetryTestHelpers.FindLog(logs, proto =>
++                    proto.SqlOperation?.OperationDetail?.OperationType == expectedOperationType);
++
++                Assert.NotNull(log);
++
++                var protoLog = TelemetryTestHelpers.GetProtoLog(log);
++
++                // Verify statement type is METADATA (not QUERY)
++                Assert.Equal(StatementType.Metadata, protoLog.SqlOperation.StatementType);
++
++                // Verify operation type matches the metadata command
++                Assert.Equal(expectedOperationType, protoLog.SqlOperation.OperationDetail.OperationType);
++
++                // Verify basic session-level telemetry fields are populated
++                TelemetryTestHelpers.AssertSessionFieldsPopulated(protoLog);
++            }
++            finally
++            {
++                connection?.Dispose();
++                TelemetryTestHelpers.ClearExporterOverride();
++            }
++        }
++    }
++}
\ No newline at end of file
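
The tests above lean on `CapturingTelemetryExporter` and the `TelemetryTestHelpers.WaitForTelemetryEvents` polling helper from the test-infrastructure PR earlier in this stack; their implementations are not shown in this hunk. A minimal sketch of what such a capturing exporter could look like (type and method signatures here are assumptions, not the actual API):

```csharp
// Hypothetical sketch: a thread-safe exporter that records telemetry logs
// in memory so E2E tests can assert on them. Real signatures may differ.
public sealed class CapturingTelemetryExporter
{
    private readonly object _lock = new();
    private readonly List<object> _logs = new();

    // Called by the telemetry pipeline instead of the network exporter.
    public void Export(object log)
    {
        lock (_lock) { _logs.Add(log); }
    }

    // Clears captured logs between iterations (exporter.Reset() above).
    public void Reset()
    {
        lock (_lock) { _logs.Clear(); }
    }

    public IReadOnlyList<object> Snapshot()
    {
        lock (_lock) { return _logs.ToArray(); }
    }

    // Polls until at least expectedCount logs arrive or the timeout elapses,
    // matching the WaitForTelemetryEvents(exporter, expectedCount, timeoutMs)
    // calls in the tests.
    public async Task<IReadOnlyList<object>> WaitForEventsAsync(int expectedCount, int timeoutMs)
    {
        var deadline = DateTime.UtcNow.AddMilliseconds(timeoutMs);
        while (DateTime.UtcNow < deadline)
        {
            var snapshot = Snapshot();
            if (snapshot.Count >= expectedCount) return snapshot;
            await Task.Delay(50);
        }
        return Snapshot(); // let the caller's Assert produce the failure message
    }
}
```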
csharp/test/Unit/DatabricksStatementUnitTests.cs
@@ -0,0 +1,48 @@
+diff --git a/csharp/test/Unit/DatabricksStatementUnitTests.cs b/csharp/test/Unit/DatabricksStatementUnitTests.cs
+--- a/csharp/test/Unit/DatabricksStatementUnitTests.cs
++++ b/csharp/test/Unit/DatabricksStatementUnitTests.cs
+ using AdbcDrivers.HiveServer2.Spark;
+ using AdbcDrivers.Databricks;
+ using Xunit;
++using OperationType = AdbcDrivers.Databricks.Telemetry.Proto.Operation.Types.Type;
+ 
+ namespace AdbcDrivers.Databricks.Tests.Unit
+ {
+             var confOverlay = GetConfOverlay(statement);
+             Assert.Null(confOverlay);
+         }
++
++        [Theory]
++        [InlineData("getcatalogs", OperationType.ListCatalogs)]
++        [InlineData("getschemas", OperationType.ListSchemas)]
++        [InlineData("gettables", OperationType.ListTables)]
++        [InlineData("getcolumns", OperationType.ListColumns)]
++        [InlineData("getcolumnsextended", OperationType.ListColumns)]
++        [InlineData("gettabletypes", OperationType.ListTableTypes)]
++        [InlineData("getprimarykeys", OperationType.ListPrimaryKeys)]
++        [InlineData("getcrossreference", OperationType.ListCrossReferences)]
++        public void GetMetadataOperationType_ReturnsCorrectType(string command, OperationType expected)
++        {
++            Assert.Equal(expected, DatabricksStatement.GetMetadataOperationType(command));
++        }
++
++        [Theory]
++        [InlineData(null)]
++        [InlineData("")]
++        [InlineData("SELECT 1")]
++        [InlineData("unknown_command")]
++        public void GetMetadataOperationType_ReturnsNull_ForNonMetadataCommands(string? command)
++        {
++            Assert.Null(DatabricksStatement.GetMetadataOperationType(command));
++        }
++
++        [Theory]
++        [InlineData("GETCATALOGS")]
++        [InlineData("GetCatalogs")]
++        [InlineData("GetTables")]
++        public void GetMetadataOperationType_IsCaseInsensitive(string command)
++        {
++            Assert.NotNull(DatabricksStatement.GetMetadataOperationType(command));
++        }
+     }
+ }
\ No newline at end of file
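
The `[Theory]` cases above pin down the contract of `DatabricksStatement.GetMetadataOperationType`: a case-insensitive lookup that returns `null` for SQL text, empty or null input, and unknown commands. A sketch of an implementation satisfying exactly these cases (the dictionary approach is an assumption; the method in the PR may be structured differently):

```csharp
// Sketch: case-insensitive command -> OperationType lookup covering the
// commands exercised by the unit tests above.
private static readonly Dictionary<string, OperationType> s_metadataOps =
    new(StringComparer.OrdinalIgnoreCase)
    {
        ["getcatalogs"] = OperationType.ListCatalogs,
        ["getschemas"] = OperationType.ListSchemas,
        ["gettables"] = OperationType.ListTables,
        ["getcolumns"] = OperationType.ListColumns,
        ["getcolumnsextended"] = OperationType.ListColumns,
        ["gettabletypes"] = OperationType.ListTableTypes,
        ["getprimarykeys"] = OperationType.ListPrimaryKeys,
        ["getcrossreference"] = OperationType.ListCrossReferences,
    };

// Returns null for null/empty/unknown input so callers can fall back to the
// plain StatementType.Query telemetry path.
internal static OperationType? GetMetadataOperationType(string? command) =>
    command != null && s_metadataOps.TryGetValue(command.Trim(), out var op)
        ? op
        : (OperationType?)null;
```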

Reproduce locally: git range-diff 425a554..19ae1c0 425a554..8da5457 | Disable: git config gitstack.push-range-diff false
