perf_events: Fix unincremented buffer base on partial copy
author     Frederic Weisbecker <fweisbec@gmail.com>
           Thu, 27 May 2010 19:34:58 +0000 (21:34 +0200)
committer  Ingo Molnar <mingo@elte.hu>
           Mon, 31 May 2010 06:46:10 +0000 (08:46 +0200)
If a sample size crosses a page boundary, the copy is made in
more than one step. However, we forget to advance the source
offset for the next copy, leading to unexpected double copies
that completely mess up the traces.
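To illustrate the failure mode outside the kernel, here is a minimal
standalone sketch (chunked_copy, CHUNK and the sample data are made up
for the example and are not part of this patch). It copies a buffer in
fixed-size steps and shows what happens when the source pointer is not
advanced between steps:

#include <stdio.h>
#include <string.h>

#define CHUNK 4	/* stand-in for the room left in the current page */

/*
 * Copy 'len' bytes from 'buf' into 'dst' in CHUNK-sized steps.
 * When 'advance_src' is 0 the source pointer is never moved, so every
 * step re-copies the first bytes of 'buf' -- the bug fixed below.
 */
static void chunked_copy(char *dst, const char *buf, size_t len,
			 int advance_src)
{
	do {
		size_t size = len < CHUNK ? len : CHUNK;

		memcpy(dst, buf, size);

		len -= size;
		dst += size;
		if (advance_src)
			buf += size;	/* the one-line fix: advance the source too */
	} while (len);
}

int main(void)
{
	const char sample[] = "ABCDEFGHIJKL";	/* 12 bytes: three 4-byte chunks */
	char broken[16] = { 0 }, fixed[16] = { 0 };

	chunked_copy(broken, sample, sizeof(sample) - 1, 0);
	chunked_copy(fixed, sample, sizeof(sample) - 1, 1);

	printf("source not advanced: %s\n", broken);	/* ABCDABCDABCD */
	printf("source advanced:     %s\n", fixed);	/* ABCDEFGHIJKL */
	return 0;
}

Run as-is, the first line prints ABCDABCDABCD (the start of the sample
copied over and over) while the second prints ABCDEFGHIJKL; the former
mirrors the duplicated event data behind the bogus trace fields shown
below.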

This fixes various kinds of bad traces that contain irrelevant
data, for example:

geany-4979  [001]  5758.077775: sched_switch: prev_comm=! prev_pid=121
prev_prio=0 prev_state=S|D|Z|X|x ==> next_comm= next_pid=7497072
next_prio=0

Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Paul Mackerras <paulus@samba.org>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <1274988898-5639-1-git-send-regression-fweisbec@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
 kernel/perf_event.c |    1 +
 1 file changed, 1 insertion(+)

diff --git a/kernel/perf_event.c b/kernel/perf_event.c
index 42a0e91..858f56f 100644
--- a/kernel/perf_event.c
+++ b/kernel/perf_event.c
@@ -3064,6 +3064,7 @@ __always_inline void perf_output_copy(struct perf_output_handle *handle,
 
                len -= size;
                handle->addr += size;
+               buf += size;
                handle->size -= size;
                if (!handle->size) {
                        struct perf_mmap_data *data = handle->data;