Show multiple user PFs in the output #3
Conversation
```c
bpf_map_delete_elem(&tgid_cr2, &tgid);
event->pf_count = 0;
#ifdef TRACE_PF_CR2
u32 tgid = task->tgid;
```
I'm not sure if we should use the process ID (tgid) or rather the thread ID (pid) for the key of the map. On the one hand, the map can be smaller; on the other hand, we'd need to record which thread generated the PF, and we might rotate through the ring buffer too quickly.
I'm also not sure if we'd need to use some form of locking if multiple threads can write into the ring buffer.
Yeah we'd need locking: https://docs.ebpf.io/linux/helper-function/bpf_map_lookup_elem/
I just don't know if per-thread would be sufficient, or if we'd need per-CPU.
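If per-CPU granularity turns out to be acceptable, one option that sidesteps explicit locking entirely is a `BPF_MAP_TYPE_PERCPU_HASH`: each CPU gets its own copy of the value, so concurrent updates from different CPUs cannot race and no `bpf_spin_lock` is needed. A minimal sketch in libbpf BTF-defined-map style (the `pf_record` struct, the `pid_cr2` map name, and the `max_entries` size are all illustrative, not from this PR):

```c
// SPDX-License-Identifier: GPL-2.0
/* Sketch only: a per-CPU hash keyed by pid (thread ID). Each CPU
 * sees its own value slot, so updates from concurrent page faults
 * on different CPUs don't need locking; the userspace reader must
 * then merge the per-CPU copies and keep the pid around to know
 * which thread faulted. */
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct pf_record {
    __u64 cr2;      /* faulting address */
    __u32 count;    /* faults seen for this pid on this CPU */
};

struct {
    __uint(type, BPF_MAP_TYPE_PERCPU_HASH);
    __uint(max_entries, 8192);
    __type(key, __u32);               /* pid (thread ID) */
    __type(value, struct pf_record);
} pid_cr2 SEC(".maps");
```

The trade-off is the one raised above: the map is larger (one value copy per CPU per key), and userspace has to reconcile the copies, but the in-kernel fast path stays lock-free.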
```c
// Commented out by default: a lot of #PF events are hit,
// so enable only if the overhead is acceptable.
// #define TRACE_PF_CR2
```
I think we should enable this, since it doesn't seem to have much of a performance impact.
```c
#define MAX_LBR_ENTRIES 32
#define MAX_USER_PF_ENTRIES 16
```
16 is fine if the ring buffer is for a single thread, but I fear that with dozens of threads there might be too many PFs, so any interesting one might be rotated out by the time we land in signal_generate. But given the locking requirement (see my comment in the .bpf.c file), I think we should split this into a per-thread data structure (or per-CPU, while also recording the pid; not sure...).
work-robot
left a comment
We need to use a finer-granular key for the PF map, due to race conditions.