023695d96e
This patch adds the ability to filter monitoring based on container groups
(cgroups) for both perf stat and perf record. It is possible to monitor
multiple cgroups in parallel. There is one cgroup per event. The cgroups to
monitor are passed via a new -G option followed by a comma-separated list of
cgroup names.

The cgroup filesystem has to be mounted. Given a cgroup name, the perf tool
finds the corresponding directory in the cgroup filesystem and opens it. It
then passes that file descriptor to the kernel.

Example:

$ perf stat -B -a -e cycles:u,cycles:u,cycles:u -G test1,,test2 -- sleep 1

 Performance counter stats for 'sleep 1':

      2,368,667,414  cycles                   test1
      2,369,661,459  cycles
      <not counted>  cycles                   test2

        1.001856890  seconds time elapsed

Signed-off-by: Stephane Eranian <eranian@google.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <4d590290.825bdf0a.7d0a.4890@mx.google.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
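
A minimal, illustrative sketch (not part of the patch) of the mechanism
described above: open a cgroup directory on the mounted cgroup filesystem and
pass its file descriptor to perf_event_open(). The PERF_FLAG_PID_CGROUP flag
value, the mount path and the event selection are assumptions made for the
sake of the example.

    /* sketch: count user-level cycles only for tasks in cgroup "test1", CPU 0 */
    #include <fcntl.h>
    #include <linux/perf_event.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    #ifndef PERF_FLAG_PID_CGROUP
    #define PERF_FLAG_PID_CGROUP (1U << 2)  /* "pid" argument is a cgroup fd */
    #endif

    int main(void)
    {
        struct perf_event_attr attr;
        unsigned long long count = 0;
        int cgrp_fd, ev_fd;

        /* assumed mount point of the cgroup filesystem */
        cgrp_fd = open("/sys/fs/cgroup/perf_event/test1", O_RDONLY);
        if (cgrp_fd < 0) {
            perror("open cgroup directory");
            return 1;
        }

        memset(&attr, 0, sizeof(attr));
        attr.size = sizeof(attr);
        attr.type = PERF_TYPE_HARDWARE;
        attr.config = PERF_COUNT_HW_CPU_CYCLES;
        attr.exclude_kernel = 1;        /* user level only, as in cycles:u */

        /* cgroup mode is per-cpu: cpu must be >= 0, pid carries the cgroup fd */
        ev_fd = syscall(__NR_perf_event_open, &attr, cgrp_fd,
                        0 /* cpu */, -1 /* group fd */, PERF_FLAG_PID_CGROUP);
        if (ev_fd < 0) {
            perror("perf_event_open");
            return 1;
        }

        sleep(1);       /* let threads of the cgroup run on CPU 0 */

        if (read(ev_fd, &count, sizeof(count)) == sizeof(count))
            printf("%llu cycles in cgroup test1 on CPU 0\n", count);

        close(ev_fd);
        close(cgrp_fd);
        return 0;
    }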

perf-stat(1)
============

NAME
----
perf-stat - Run a command and gather performance counter statistics

SYNOPSIS
--------
[verse]
'perf stat' [-e <EVENT> | --event=EVENT] [-a] <command>
'perf stat' [-e <EVENT> | --event=EVENT] [-a] -- <command> [<options>]

DESCRIPTION
-----------
This command runs a command and gathers performance counter statistics
from it.

OPTIONS
-------
<command>...::
        Any command you can specify in a shell.

-e::
--event=::
        Select the PMU event. Selection can be a symbolic event name
        (use 'perf list' to list all events) or a raw PMU
        event (eventsel+umask) in the form of rNNN where NNN is a
        hexadecimal event descriptor.
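
        For example, a raw event can be requested directly by its hexadecimal
        code (the value below is only an illustrative placeholder; raw event
        encodings are CPU-specific):

        $ perf stat -e r1a8 -a -- sleep 1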

-i::
--no-inherit::
        child tasks do not inherit counters

-p::
--pid=<pid>::
        stat events on existing process id

-t::
--tid=<tid>::
        stat events on existing thread id

-a::
--all-cpus::
        system-wide collection from all CPUs

-c::
--scale::
        scale/normalize counter values

-r::
--repeat=<n>::
        repeat command and print average + stddev (max: 100)

-B::
--big-num::
        print large numbers with thousands' separators according to locale

-C::
--cpu=::
        Count only on the list of CPUs provided. Multiple CPUs can be provided as a
        comma-separated list with no space: 0,1. Ranges of CPUs are specified with -: 0-2.
        In per-thread mode, this option is ignored. The -a option is still necessary
        to activate system-wide monitoring. Default is to count on all CPUs.

-A::
--no-aggr::
        Do not aggregate counts across all monitored CPUs in system-wide mode (-a).
        This option is only valid in system-wide mode.

-n::
--null::
        null run - don't start any counters

-v::
--verbose::
        be more verbose (show counter open errors, etc)

-x SEP::
--field-separator SEP::
        print counts using a CSV-style output to make it easy to import directly into
        spreadsheets. Columns are separated by the string specified in SEP.
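
        For example (an illustrative invocation only; the resulting columns
        depend on the events requested):

        $ perf stat -x ';' -e cycles,instructions -- sleep 1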

-G name::
--cgroup name::
        monitor only in the container (cgroup) called "name". This option is available only
        in per-cpu mode. The cgroup filesystem must be mounted. All threads belonging to
        container "name" are monitored when they run on the monitored CPUs. Multiple cgroups
        can be provided. Each cgroup is applied to the corresponding event, i.e., first cgroup
        to first event, second cgroup to second event and so on. It is possible to provide
        an empty cgroup (monitor all the time) using, e.g., -G foo,,bar. Cgroups must have
        corresponding events, i.e., they always refer to events defined earlier on the command
        line.
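
        For example, assuming cgroups "foo" and "bar" already exist in the
        mounted cgroup filesystem, the following counts cycles for threads of
        "foo" and instructions for threads of "bar", system-wide:

        $ perf stat -e cycles,instructions -G foo,bar -a -- sleep 1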

EXAMPLES
--------

$ perf stat -- make -j

 Performance counter stats for 'make -j':

    8117.370256  task clock ticks     #      11.281 CPU utilization factor
            678  context switches     #       0.000 M/sec
            133  CPU migrations       #       0.000 M/sec
         235724  pagefaults           #       0.029 M/sec
    24821162526  CPU cycles           #    3057.784 M/sec
    18687303457  instructions         #    2302.138 M/sec
      172158895  cache references     #      21.209 M/sec
       27075259  cache misses         #       3.335 M/sec

 Wall-clock time elapsed:   719.554352 msecs

SEE ALSO
--------
linkperf:perf-top[1], linkperf:perf-list[1]