From: Yoshihiro YUNOMAE <firstname.lastname@example.org>
To: email@example.com
Subject: [RFC PATCH 0/6] virtio-trace: Support virtio-trace
Date: Tue, 24 Jul 2012 11:36:57 +0900
Cc: Herbert Xu <firstname.lastname@example.org>, Arnd Bergmann <email@example.com>,
    Frederic Weisbecker <firstname.lastname@example.org>,
    Borislav Petkov <email@example.com>, firstname.lastname@example.org,
    "Frank Ch. Eigler" <email@example.com>, Ingo Molnar <firstname.lastname@example.org>,
    Mathieu Desnoyers <email@example.com>,
    Steven Rostedt <firstname.lastname@example.org>,
    Anthony Liguori <email@example.com>,
    Greg Kroah-Hartman <firstname.lastname@example.org>,
    Amit Shah <email@example.com>
The following patch set provides a low-overhead system that lets a host
collect kernel tracing data from its guests in a virtualization environment.

A guest OS generally shares devices with other guests and with the host, so
the root cause of a problem observed in one guest may lie in another guest or
in the host. Diagnosing such problems therefore requires collecting tracing
data from several guests and from the host at the same time. One way to do
this is to gather the tracing data of all guests in the host, and the usual
transport for that is the network. However, network I/O places a high load on
guest applications, because the data must pass through many network stack
layers. A method of collecting the data without going through the network is
therefore needed.
This June we submitted a patch set for "IVRing", a ring-buffer driver built
on Inter-VM shared memory (IVShmem), to LKML: http://lwn.net/Articles/500304/
IVRing and the IVRing reader communicate through POSIX shared memory rather
than the network, so they already provide a low-overhead way to collect guest
tracing data. However, that patch set has the following problems:
 - it uses IVShmem instead of virtio
 - it introduces a new ring-buffer instead of reusing the existing kernel
   ring-buffer, which means:
  -- no SMP support
  -- a buffer size limitation
  -- no live migration support (which would probably be difficult to add)
Therefore, we propose a new system, "virtio-trace", which uses an enhanced
virtio-serial and the existing ftrace ring-buffer to collect guest kernel
tracing data. The system has 5 main components:
(1) Ring-buffer of ftrace in a guest
    - When the trace agent reads the ring-buffer, pages are removed from it.
(2) Trace agent in the guest
    - Splices ring-buffer pages to read_pipe using splice() without copying
      memory, then splices them from write_pipe to virtio, again without
      copying.
(3) Virtio-console driver in the guest
    - Passes the pages to the virtio-ring.
(4) Virtio-serial bus in QEMU
    - Copies the pages to a kernel pipe.
(5) Reader in the host
    - Reads guest tracing data via a FIFO (named pipe).
For collecting guest tracing data in a host, we compared the performance of
virtio-trace against native (just running ftrace in the guest), IVRing, and
virtio-serial (the normal read/write method).
The evaluation setup is as follows:
(a) A KVM guest is prepared.
    - One physical CPU is dedicated to the guest as a virtual CPU (VCPU).
(b) The guest writes tracing data to the ftrace ring-buffer.
    - The probe points are all tracepoints of sched, timer, and kmem.
(c) While the trace data is being written, Dhrystone 2 from UnixBench runs as
    a benchmark in the guest.
    - Dhrystone 2 measures system performance by repeating integer
      arithmetic and reporting a score.
    - Since a higher score means better system performance, a score drop
      relative to the bare environment indicates that some operation is
      disturbing the integer arithmetic. We therefore define the overhead of
      transporting trace data as:
        OVERHEAD = (1 - SCORE_OF_A_METHOD / NATIVE_SCORE) * 100.
The four methods are set up as follows:
[1] Native
    - Only records trace data to the ring-buffer on the guest.
[2] Virtio-trace
    - Runs a trace agent on the guest.
    - A reader on the host opens the FIFO with the cat command.
[3] IVRing
    - A SystemTap script in the guest records trace data to IVRing.
    -- The probe points are the same as for ftrace.
[4] Virtio-serial (normal read/write)
    - A reader (cat) on the guest outputs trace data to the host via
      standard output over virtio-serial.
Other information:
 - host
   kernel: 3.3.7-1 (Fedora 16)
   CPU: Intel Xeon (...GHz, 12 cores)
 - guest (only one guest booted)
   kernel: 3.5.0-rc4+ (Fedora 16)
The scores of the three methods, measured against the bare (native)
environment, are as follows:

                  Score         Overhead vs. Native
 Native:          28807569.5    -
 Virtio-trace:    28685049.5    0.43%
 IVRing:          28418595.5    1.35%
 Virtio-serial:   13262258.7    53.96%
***Just enhancement ideas***
- Support for trace-cmd
- Support for 9pfs protocol
- Support for non-blocking mode in QEMU
- Make "vhost-serial"
Masami Hiramatsu (5):
virtio/console: Allocate scatterlist according to the current pipe size
ftrace: Allow stealing pages from pipe buffer
virtio/console: Wait until the port is ready on splice
virtio/console: Add a failback for unstealable pipe buffer
virtio/console: Add splice_write support
Yoshihiro YUNOMAE (1):
tools: Add guest trace agent as a user tool
drivers/char/virtio_console.c | 198 ++++++++++++++++++--
kernel/trace/trace.c | 8 -
tools/virtio/virtio-trace/Makefile | 14 +
tools/virtio/virtio-trace/README | 118 ++++++++++++
tools/virtio/virtio-trace/trace-agent-ctl.c | 137 ++++++++++++++
tools/virtio/virtio-trace/trace-agent-rw.c | 192 +++++++++++++++++++
tools/virtio/virtio-trace/trace-agent.c | 270 +++++++++++++++++++++++++++
tools/virtio/virtio-trace/trace-agent.h | 75 ++++++++
8 files changed, 985 insertions(+), 27 deletions(-)
create mode 100644 tools/virtio/virtio-trace/Makefile
create mode 100644 tools/virtio/virtio-trace/README
create mode 100644 tools/virtio/virtio-trace/trace-agent-ctl.c
create mode 100644 tools/virtio/virtio-trace/trace-agent-rw.c
create mode 100644 tools/virtio/virtio-trace/trace-agent.c
create mode 100644 tools/virtio/virtio-trace/trace-agent.h
Software Platform Research Dept. Linux Technology Center
Hitachi, Ltd., Yokohama Research Laboratory