feat: add TimeSlave process and libTSClient shm IPC channel #8
gordon9901 wants to merge 12 commits into eclipse-score:main
Conversation
License Check Results 🚀 The license check job ran with the Bazel command: `bazel run //:license-check`
The created documentation from the pull request is available at: docu-html
BjoernAtBosch left a comment:
Some comments for now
```cpp
const TmvT t3c = TmvT{t3.ns + c1.ns + c2.ns};
// t4 = BPF_T_BINTIME (PHC) receive timestamp of Pdelay_Resp (main BPF fd)
const TmvT t4 = resp_.recvHardwareTS;

const std::int64_t delay = ((t2.ns - t1.ns) + (t4.ns - t3c.ns)) / 2LL;

if (delay < 0)
    return;

PDelayResult r{};
r.path_delay_ns = delay;
r.valid = true;
```
Shouldn't we move all kinds of evaluation to the TimeDaemon?
That component will later be ASIL certifiable and should do all kinds of timestamp validation ...
TimeSlave cannot be certified (i.e. is QM only). It could induce errors due to interference.
Agreed. GptpIpcPDelayData now carries all raw timestamps (t1/t2/t3/t4), so TimeDaemon can recompute and validate independently.
The current checks in TimeSlave are kept as QM best-effort guards to avoid writing invalid data to shared memory. I suggest discussing the ASIL-B validation migration to TimeDaemon in a meeting first; once agreed, we track it as a separate issue, outside this PR. Does that work for you?
Cool. Yes, valid remark. Accepted
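Following up on the reply above, a minimal sketch of how the TimeDaemon side could recompute and validate the mean path delay from the raw t1..t4 timestamps. The struct and function names here are hypothetical stand-ins, not the actual `GptpIpcPDelayData` layout:

```cpp
#include <cassert>
#include <cstdint>
#include <optional>

// Hypothetical mirror of the raw timestamps carried in GptpIpcPDelayData.
struct RawPdelayTimestamps {
    std::int64_t t1_ns;  // Pdelay_Req TX (local PHC)
    std::int64_t t2_ns;  // Pdelay_Req RX (peer)
    std::int64_t t3_ns;  // Pdelay_Resp TX (peer, corrections already applied)
    std::int64_t t4_ns;  // Pdelay_Resp RX (local PHC)
};

// Recompute the IEEE 802.1AS mean path delay and reject physically
// impossible (negative) results instead of storing them.
std::optional<std::int64_t> RecomputePathDelayNs(const RawPdelayTimestamps& ts)
{
    const std::int64_t delay =
        ((ts.t2_ns - ts.t1_ns) + (ts.t4_ns - ts.t3_ns)) / 2LL;
    if (delay < 0) {
        return std::nullopt;  // invalid measurement: keep the last good value
    }
    return delay;
}
```

Doing this in TimeDaemon keeps the safety-relevant plausibility check in the ASIL-certifiable component, while TimeSlave's copy of the check remains a QM best-effort guard.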
```cpp
 * @param buf Buffer large enough to hold existing payload plus header.
 * @param buf_len In/out: payload length → frame length after prepend.
 * @param src_mac Source MAC address (should be the port's own MAC).
 * @return true on success, false if the buffer would overflow.
```
Param buf_capacity is not described
Fixed. Added @param buf_capacity description in the Doxygen comments of frame_codec.h.
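For illustration, a hedged sketch of a frame-prepend function with the completed Doxygen block. `PrependEthernetHeader`, its parameter order, and the broadcast destination MAC are assumptions for this sketch, not the actual `frame_codec.h` API (gPTP normally uses a fixed multicast destination):

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <cstring>

/// Prepends an Ethernet header in front of the existing payload.
///
/// @param buf          Buffer large enough to hold existing payload plus header.
/// @param buf_capacity Total capacity of @p buf in bytes; the call fails if
///                     payload length plus header size would exceed it.
/// @param buf_len      In/out: payload length on entry, frame length after prepend.
/// @param src_mac      Source MAC address (should be the port's own MAC).
/// @return true on success, false if the buffer would overflow.
bool PrependEthernetHeader(std::uint8_t* buf, std::size_t buf_capacity,
                           std::size_t* buf_len, const std::uint8_t src_mac[6])
{
    constexpr std::size_t kHeaderSize = 14;  // dst MAC + src MAC + EtherType
    if (*buf_len + kHeaderSize > buf_capacity) {
        return false;  // would overflow the caller's buffer
    }
    std::memmove(buf + kHeaderSize, buf, *buf_len);  // shift payload right
    std::memset(buf, 0xFF, 6);                       // dst MAC (placeholder)
    std::memcpy(buf + 6, src_mac, 6);                // src MAC
    buf[12] = 0x88; buf[13] = 0xF7;                  // EtherType 0x88F7 (PTP)
    *buf_len += kHeaderSize;
    return true;
}
```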
```cpp
    std::lock_guard<std::mutex> lk(mutex_);
    resp_ = msg;
}

void PeerDelayMeasurer::OnResponseFollowUp(const PTPMessage& msg)
{
    std::lock_guard<std::mutex> lk(mutex_);
    resp_fup_ = msg;
    ComputeAndStoreUnlocked();
```
In those two functions the received messages are stored without checking whether the contained sequence ID is the expected one. Can we rely on the assumption that responses to an earlier request are never received after the next request has been sent?
Fixed. Added early filtering in OnResponse() and OnResponseFollowUp(): messages with non-matching sequence IDs (vs req_.ptpHdr.sequenceId) are ignored and do not update state. The existing 3-way seq ID consistency check in ComputeAndStoreUnlocked() is kept as a second line of defense.
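The described early filter can be sketched as follows. The types here (`PdelaySketch`, `Msg`) are simplified stand-ins for the real `PeerDelayMeasurer`/`PTPMessage`, which compare against `req_.ptpHdr.sequenceId`:

```cpp
#include <cassert>
#include <cstdint>
#include <mutex>

struct Msg { std::uint16_t sequence_id; };

class PdelaySketch {
public:
    void SendRequest(std::uint16_t seq) {
        std::lock_guard<std::mutex> lk(mutex_);
        expected_seq_ = seq;   // remember the sequence ID of our request
        have_resp_ = false;
    }
    void OnResponse(const Msg& msg) {
        std::lock_guard<std::mutex> lk(mutex_);
        if (msg.sequence_id != expected_seq_) {
            return;  // stale or foreign response: ignore, do not update state
        }
        resp_ = msg;
        have_resp_ = true;
    }
    bool HaveResponse() const {
        std::lock_guard<std::mutex> lk(mutex_);
        return have_resp_;
    }
private:
    mutable std::mutex mutex_;
    std::uint16_t expected_seq_{0};
    Msg resp_{};
    bool have_resp_{false};
};
```

With the filter at the entry point, a late response to an earlier request can never overwrite state belonging to the current measurement, independent of the later 3-way consistency check.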
```cpp
int PeerDelayMeasurer::SendRequest(IRawSocket& socket)
{
    PTPMessage req{};
```
I guess we don't detect whether no response, or multiple responses, to a single request are received (needed to detect non-time-aware switches).
Agreed. This is now handled as follows: a resp_count_ counter has been introduced and is reset to 0 together with req_ under lock in SendRequest(). OnResponse() increments it only for matching sequence IDs. If resp_count_ > 1, ComputeAndStoreUnlocked() returns early without producing a result, covering the IEEE 802.1AS non-time-aware switch detection case. The no-response case is implicitly handled by the existing sequence ID consistency check (no computation triggered, last valid result retained).
Yes, I think that's fine for now. I guess later we will need a more detailed concept how to deal with/behave in that kind of scenarios.
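A minimal sketch of the described `resp_count_` mechanism. Class and member names are illustrative stand-ins, not the actual implementation:

```cpp
#include <cassert>
#include <cstdint>
#include <mutex>

class RespCounterSketch {
public:
    void SendRequest(std::uint16_t seq) {
        std::lock_guard<std::mutex> lk(mutex_);
        expected_seq_ = seq;
        resp_count_ = 0;  // reset together with the request, under the same lock
    }
    void OnResponse(std::uint16_t seq) {
        std::lock_guard<std::mutex> lk(mutex_);
        if (seq != expected_seq_) {
            return;       // non-matching sequence ID: ignored entirely
        }
        ++resp_count_;
    }
    // Mirrors the early-out in ComputeAndStoreUnlocked(): exactly one response
    // is expected; more than one hints at a non-time-aware switch in the path,
    // zero means no computation is triggered and the last result is retained.
    bool MeasurementUsable() const {
        std::lock_guard<std::mutex> lk(mutex_);
        return resp_count_ == 1;
    }
private:
    mutable std::mutex mutex_;
    std::uint16_t expected_seq_{0};
    int resp_count_{0};
};
```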
feat: introduce TimeSlave process and libTSClient IPC channel
Summary
This PR splits the gPTP protocol stack out of `TimeDaemon` into a new standalone process, `TimeSlave`, and introduces a lightweight shared-memory IPC library (`libTSClient`) so that `TimeDaemon` can consume `PtpTimeInfo` snapshots from `TimeSlave` without running the raw-socket protocol itself.

Changes

`score/TimeSlave/` (new)

A self-contained gPTP slave process (`score::ts::TimeSlave`) that:
- runs `GptpEngine` (existing `RxThread` + `PdelayThread` logic) to participate in the IEEE 802.1AS protocol;
- reuses the existing `raw_socket`, `network_identity`, and `phc_adjuster` components;
- keeps probing (`probe`) and recording (`recorder`) capabilities;
- publishes `GptpIpcData` snapshots to shared memory via `GptpIpcPublisher`.

`score/libTSClient/` (new)

A process-boundary IPC library providing:
- `GptpIpcData` / `GptpIpcStatus` / `GptpIpcSyncFupData` / `GptpIpcPDelayData` (`gptp_ipc_data.h`, namespace `score::ts`): IPC-layer data types conveying PTP time, status, and measurement fields.
- `GptpIpcRegion` (`gptp_ipc_channel.h`): 64-byte cache-line-aligned shared memory layout with a seqlock (`seq`/`seq_confirm`) for lock-free single-writer / multi-reader access. Default channel name: `kGptpIpcName = "/gptp_ptp_info"` (namespace `score::ts::details`).
- `GptpIpcPublisher`: creates and writes to the POSIX shared memory segment (used by `TimeSlave`).
- `GptpIpcReceiver`: opens and reads from the segment with seqlock retry (used by `TimeDaemon` via `GPTPRealMachine`).
- `gptp_ipc.h`: convenience header aggregating all of the above.

`score/TimeDaemon/code/ptp_machine/real/` (new)
- `details::ShmPTPEngine` (`real/details/shm_ptp_engine.h`, namespace `score::td::details`): a `PTPEngine`-compatible adapter backed by `GptpIpcReceiver`; converts `GptpIpcData` (libTSClient) to `PtpTimeInfo` (TimeDaemon).
- `GPTPRealMachine`: type alias for `PTPMachine<details::ShmPTPEngine>`, slotting cleanly into the existing `TimeDaemon` pipeline.
- `CreateGPTPRealMachine(name, ipc_name)`: factory function for convenient construction; `ipc_name` defaults to `kGptpIpcName`.

`score/TimeDaemon/code/common/data_types/BUILD`, `score/TimeDaemon/code/data_types/BUILD` (modified)

Widened Bazel visibility from `//score/TimeDaemon:__subpackages__` to `//score:__subpackages__` so that `TimeSlave` and `libTSClient` can reuse shared data types (`PtpTimeInfo`, etc.) without duplication.

`score/TimeDaemon/code/ptp_machine/BUILD` (modified)

Added `//score/TimeDaemon/code/ptp_machine/real:unit_test_suite` to the aggregated test target.

Architecture
Ticket: eclipse-score/score#2691
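As a rough illustration of the seqlock scheme described above, here is a simplified sketch: process-local atomics stand in for the 64-byte-aligned POSIX shared-memory region, and a single `int64` payload stands in for `GptpIpcData`. The real `GptpIpcPublisher`/`GptpIpcReceiver` layout and ordering may differ:

```cpp
#include <atomic>
#include <cassert>
#include <cstdint>

// Single-writer / multi-reader seqlock region (seq/seq_confirm variant).
struct SeqlockRegion {
    std::atomic<std::uint32_t> seq{0};          // bumped before the write
    std::atomic<std::uint32_t> seq_confirm{0};  // set equal to seq after it
    std::int64_t payload_ns{0};                 // stand-in for GptpIpcData
};

// Writer (publisher side): mark write in progress, write payload, confirm.
void Publish(SeqlockRegion& r, std::int64_t value) {
    const std::uint32_t next = r.seq.load(std::memory_order_relaxed) + 1;
    r.seq.store(next, std::memory_order_release);
    r.payload_ns = value;
    r.seq_confirm.store(next, std::memory_order_release);
}

// Reader (receiver side): retry until a consistent snapshot is observed,
// i.e. no write started or finished while the payload was being read.
std::int64_t ReadSnapshot(const SeqlockRegion& r) {
    for (;;) {
        const std::uint32_t before = r.seq_confirm.load(std::memory_order_acquire);
        const std::int64_t value = r.payload_ns;
        const std::uint32_t after = r.seq.load(std::memory_order_acquire);
        if (before == after) {
            return value;  // seq == seq_confirm: writer was not mid-update
        }
    }
}
```

The two-counter form lets the reader detect a torn read without the writer ever blocking, which is what makes the channel safe for a QM writer (`TimeSlave`) feeding an ASIL-bound reader (`TimeDaemon`).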