a68fd6a6cd
Include trace_augment.h for TRACE_AUG_MAX_BUF, so that BPF reads at most
TRACE_AUG_MAX_BUF bytes of buffer.

Determine what type the argument is and how many bytes to read from user
space, using the value in the beauty_map. This is the relation between a
parameter type, its corresponding value in the beauty_map, and how many
bytes we eventually read:

  string: 1                          -> size of string (till null)
  struct: size of struct             -> size of struct
  buffer: -1 * (index of paired len) -> value of paired len (maximum: TRACE_AUG_MAX_BUF)

After reading from user space, we output the augmented data using
bpf_perf_event_output().

If the struct augmenter, augment_sys_enter(), fails, we fall back to using
bpf_tail_call().

I have to make the payload 6 times the size of augmented_arg to pass the
BPF verifier.

Signed-off-by: Howard Chu <howardchu95@gmail.com>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Ian Rogers <irogers@google.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Kan Liang <kan.liang@linux.intel.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Link: https://lore.kernel.org/r/20240815013626.935097-10-howardchu95@gmail.com
Link: https://lore.kernel.org/r/20240824163322.60796-7-howardchu95@gmail.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
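To make the beauty_map dispatch above concrete, here is a minimal sketch in
BPF C. It is not the actual augmented_raw_syscalls.bpf.c code: the
TRACE_AUG_MAX_BUF value, the struct/map layouts and the augment_arg() helper
are assumptions for illustration, and the hardcoded beauty value stands in
for a lookup in the beauty_map BPF map.

/*
 * Sketch only: illustrates the per-argument dispatch described above,
 * not the real augmented_raw_syscalls.bpf.c implementation.
 */
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>

#define TRACE_AUG_MAX_BUF 32 /* assumed; the real value comes from trace_augment.h */

struct augmented_arg {
    u32  size;
    char value[TRACE_AUG_MAX_BUF];
};

struct {
    __uint(type, BPF_MAP_TYPE_PERF_EVENT_ARRAY);
    __uint(key_size, sizeof(int));
    __uint(value_size, sizeof(u32));
} __augmented_syscalls__ SEC(".maps");

struct {
    __uint(type, BPF_MAP_TYPE_PROG_ARRAY);
    __uint(max_entries, 512);
    __uint(key_size, sizeof(u32));
    __uint(value_size, sizeof(u32));
} syscalls_sys_enter SEC(".maps");

/*
 * Dispatch on the beauty_map value for one syscall argument:
 *   1                          -> string, read until NUL
 *   size of struct (> 1)       -> struct, read that many bytes
 *   -1 * (index of paired len) -> buffer, read the paired length argument's
 *                                 value, capped at TRACE_AUG_MAX_BUF
 */
static __always_inline int augment_arg(struct augmented_arg *aug,
                                       const void *user_ptr, int beauty,
                                       const unsigned long *sc_args)
{
    u32 len;

    if (beauty == 1) {                /* string */
        long n = bpf_probe_read_user_str(aug->value, sizeof(aug->value), user_ptr);
        if (n < 0)
            return -1;
        len = n;
    } else {
        if (beauty > 1)               /* struct: the value is the size itself */
            len = beauty;
        else if (beauty < 0)          /* buffer: size lives in the paired argument */
            len = sc_args[-beauty];
        else
            return -1;

        if (len > sizeof(aug->value)) /* the real code masks this for the verifier */
            len = sizeof(aug->value);
        if (bpf_probe_read_user(aug->value, len, user_ptr))
            return -1;
    }

    aug->size = len;
    return 0;
}

SEC("tp/raw_syscalls/sys_enter")
int sys_enter(struct trace_event_raw_sys_enter *ctx)
{
    struct augmented_arg aug = {};
    int beauty = 1; /* pretend arg 0 is a string; really looked up in beauty_map */

    if (augment_arg(&aug, (const void *)ctx->args[0], beauty, ctx->args) == 0)
        return bpf_perf_event_output(ctx, &__augmented_syscalls__,
                                     BPF_F_CURRENT_CPU, &aug,
                                     offsetof(struct augmented_arg, value) + aug.size);

    /* augmentation failed: fall back to the tail-called per-syscall augmenter */
    bpf_tail_call(ctx, &syscalls_sys_enter, ctx->id);
    return 1;
}

char LICENSE[] SEC("license") = "GPL";

The real program gathers all six arguments into a single payload, which is
why it has to be sized 6 times augmented_arg to keep the BPF verifier happy,
as noted above.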