[RFC][PATCH] Add --split support for dump on filesystem

HATAYAMA Daisuke d.hatayama at jp.fujitsu.com
Thu Mar 27 07:04:55 UTC 2014


From: Vivek Goyal <vgoyal at redhat.com>
Subject: Re: [RFC][PATCH] Add --split support for dump on filesystem
Date: Wed, 26 Mar 2014 14:05:07 -0400

> On Tue, Mar 25, 2014 at 08:08:48PM +0900, HATAYAMA, Daisuke wrote:
>> Hello,
>> 
>> This is an RFC patch intended to first review basic design of --split option support.
>> 
This version automatically appends the --split option if more than one CPU is available in the kdump 2nd kernel. I guess someone probably won't like the situation where multiple vmcores are generated implicitly, without any explicit user operation. So I'd like comments on this design first.
> 
> Hi Hatayama,
> 
> Can you give some more details about how --split feature of makedumpfile
> works. I have never used it. Why should I split the file into multiple
> files? And how to get back original single file.
> 

The crash utility supports vmcores split by makedumpfile --split. The
syntax is:

$ crash vmlinux vmcore-0 vmcore-1 ... vmcore-{N-1}
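For context, the split files come from the producing side. A purely illustrative pair of commands (paths and the dump level -d 31 are examples, not fixed values) might look like:

```shell
# Split the dump across two output files while compressing (-c);
# each vmcore-N then becomes one argument to crash.
makedumpfile -c -d 31 --split /proc/vmcore vmcore-0 vmcore-1
crash vmlinux vmcore-0 vmcore-1
```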

> Also, I don't think we should be adding --split automatically. I want to
> stick to user specified core collector and options and not add things
> silently.
> 
> If user wants to take advantage of parallelism, they need to modify
> nr_cpus and they need to modify core_collector line also and we should
> document it properly.
> 

The problem is that we currently have no way to specify the degree of
parallelism in core_collector, since it is determined by the number of
vmcore arguments passed to --split.

How about this? We do parallel processing if

- makedumpfile is specified in core_collector with the --split option, and
- nr_cpus is larger than 1.

That is, if --split is specified explicitly, we assume the user intends
parallel processing.
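The two conditions above could be checked in the kdump script roughly as follows. This is only a sketch; the variable names, the /var/crash path, and the way nr_cpus is obtained are assumptions, not the actual kdump implementation:

```shell
# Hypothetical sketch: expand a user-specified core_collector containing
# --split into one output file per available CPU.
core_collector="makedumpfile -c -d 31 --split"   # from kdump.conf (example)
nr_cpus=$(getconf _NPROCESSORS_ONLN)             # CPUs in the 2nd kernel

args=""
if echo "$core_collector" | grep -q -- "--split" && [ "$nr_cpus" -gt 1 ]; then
    # --split given explicitly and more than one CPU: one vmcore per CPU.
    i=0
    while [ "$i" -lt "$nr_cpus" ]; do
        args="$args /var/crash/vmcore-$i"
        i=$((i + 1))
    done
else
    # Otherwise fall back to a single output file.
    args="/var/crash/vmcore"
fi

echo "$core_collector /proc/vmcore$args"
```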

I'll post documentation once the design is settled.

> Also can't we take advatage of parallelism for compression and while
> writing compressed data write it to a single file. That way no special
> configuration will be required and makedumpfile should be able to fork
> as many threads as number of cpus, do the compression and write the
> output to a single file.
> 

First, the current makedumpfile cannot do this. To implement it, we
would need pthreads; strictly speaking it could also be done with
fork(), but doing it that way is harmful.

Historically, the reason makedumpfile chose --split was to avoid
growing the initramfs by pulling in libc.so. (This is no longer a
problem, since we now often include commands that link libc.so, such
as scp, in the initramfs anyway.)

Also, splitting the dump into multiple vmcores has another merit: the
I/O itself can be parallelized across multiple disks. This is
necessary when a full dump is strongly required.

So, doing it is possible, and easier with pthreads. I have in mind
logic where multiple threads write compressed data into the same
buffer, and whichever thread detects that the buffer is full flushes
it. But makedumpfile doesn't have this feature today; it would have to
be newly implemented.

Thanks.
HATAYAMA, Daisuke


