Bug Report
Because many proto files utilize the repeated NIComplexF32 custom data type, there is a significant slowdown when transferring large data sets (e.g. a few ms of data at 5 GS/s). For newer, high-sample-rate devices it is not always possible to avoid these transfers. As an example, fetching 10 ms of 5 GS/s IQ data using FetchIQSingleRecordComplexF32Request (nirfsa.proto) takes ~5 seconds to fetch from the hardware and an additional ~10 seconds (or more) to convert the result into a 1D array of complex numbers (to make the data type compatible with numpy). By comparison, returning a 1D array of CSGL values in the LabVIEW RFSA fetch shipping example takes only about 500 ms, not 15-20 seconds.
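To illustrate where the client-side conversion time goes, here is a minimal sketch. It assumes the repeated NIComplexF32 message exposes real and imaginary fields (the stand-in dataclass below is not the actual generated protobuf class): building complex values one element at a time in Python is the slow path, whereas flattening to a float32 buffer and reinterpreting it as complex64 is vectorized.

```python
import numpy as np
from dataclasses import dataclass

# Stand-in for one repeated NIComplexF32 element; the real generated
# protobuf message may use different field names.
@dataclass
class NIComplexF32:
    real: float
    imaginary: float

samples = [NIComplexF32(float(i), float(-i)) for i in range(100_000)]

# Naive per-element conversion (the slow path described above):
naive = np.array([complex(s.real, s.imaginary) for s in samples],
                 dtype=np.complex64)

# Vectorized alternative: interleave I/Q into one float32 buffer,
# then reinterpret the buffer as complex64 without copying.
flat = np.fromiter((v for s in samples for v in (s.real, s.imaginary)),
                   dtype=np.float32, count=2 * len(samples))
fast = flat.view(np.complex64)

assert np.array_equal(naive, fast)
```

Even the vectorized version still has to iterate the repeated message field once in Python, which is part of why a wire format that deserializes directly into contiguous floats would help.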
Note
grpc-device provides .proto files and a server to support calling NI driver APIs over gRPC. Can you reproduce your problem by calling the driver API directly?
Repo or Code Sample
[NI] https://dev.azure.com/ni/DevCentral/_git/mst-adg-rf?path=/General/Examples/gRPC%20RF%20Demos/src/Python/nirfsa-acq-to-file-threaded.py&version=GBmain&_a=contents
The RFSA Acquire IQ Data in Blocks.vi shipping example VI can serve as a comparison.
Expected Behavior
Current Behavior
Possible Solution
Change the data type to two repeated floats (one for the I data and one for the Q data); that should help somewhat. Otherwise, investigate sideband IP and/or gRPC streaming for the grpc-device APIs that involve large data transfers.
Context
Your Environment
Operating system and version: [e.g. Windows 11 24H2, Ubuntu Linux 24.04, NI Linux RT 2024 Q4]
All NI driver versions: [e.g. NI-DAQmx 2024 Q4, NI-FGEN 2024 Q4]
grpc-device version: [e.g. 2.11.0]

AB#3179219