Mailing List Archive: Xen: Devel

[PATCH] qemu/xendisk: set maximum number of grants to be used

 

 



JBeulich at suse

May 11, 2012, 12:19 AM

Post #1 of 4
[PATCH] qemu/xendisk: set maximum number of grants to be used

Legacy (non-pvops) gntdev drivers may require this to be done when the
number of grants intended to be used simultaneously exceeds a certain
driver specific default limit.

Signed-off-by: Jan Beulich <jbeulich [at] suse>

--- a/hw/xen_disk.c
+++ b/hw/xen_disk.c
@@ -536,6 +536,10 @@ static void blk_alloc(struct XenDevice *
     if (xen_mode != XEN_EMULATE) {
         batch_maps = 1;
     }
+    if (xc_gnttab_set_max_grants(xendev->gnttabdev,
+            max_requests * BLKIF_MAX_SEGMENTS_PER_REQUEST + 1) < 0)
+        xen_be_printf(xendev, 0, "xc_gnttab_set_max_grants failed: %s\n",
+                      strerror(errno));
 }

static int blk_init(struct XenDevice *xendev)
Attachments: qemu-xendisk-set-max-grants.patch (0.72 KB)
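
For readers who want the arithmetic behind that one-liner spelled out, here is a minimal standalone sketch. It assumes max_requests = 32 (the in-flight request limit in xen_disk.c) and BLKIF_MAX_SEGMENTS_PER_REQUEST = 11, the values discussed later in this thread; the constants are hard-coded for illustration rather than taken from the Xen headers.

/* Sketch only: how the ceiling passed to xc_gnttab_set_max_grants() is
 * derived. One grant per data segment of every in-flight request, plus
 * one extra grant for the shared ring page. */
#include <stdio.h>

#define MAX_REQUESTS                   32   /* assumed in-flight ioreq limit */
#define BLKIF_MAX_SEGMENTS_PER_REQUEST 11   /* assumed data grants per request */

int main(void)
{
    unsigned int max_grants =
        MAX_REQUESTS * BLKIF_MAX_SEGMENTS_PER_REQUEST + 1;   /* +1 for the ring */

    printf("grant ceiling: %u\n", max_grants);   /* prints 353 = 32 * 11 + 1 */
    return 0;
}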


JBeulich at suse

May 11, 2012, 7:19 AM

Post #2 of 4
Re: [PATCH] qemu/xendisk: set maximum number of grants to be used

>>> On 11.05.12 at 09:19, "Jan Beulich" <JBeulich [at] suse> wrote:
> Legacy (non-pvops) gntdev drivers may require this to be done when the
> number of grants intended to be used simultaneously exceeds a certain
> driver specific default limit.
>
> Signed-off-by: Jan Beulich <jbeulich [at] suse>
>
> --- a/hw/xen_disk.c
> +++ b/hw/xen_disk.c
> @@ -536,6 +536,10 @@ static void blk_alloc(struct XenDevice *
> if (xen_mode != XEN_EMULATE) {
> batch_maps = 1;
> }
> + if (xc_gnttab_set_max_grants(xendev->gnttabdev,
> + max_requests * BLKIF_MAX_SEGMENTS_PER_REQUEST + 1) < 0)

In more extensive testing it appears that very rarely this value is still
too low:

xen be: qdisk-768: can't map 11 grant refs (Cannot allocate memory, 342 maps)

342 + 11 = 353 > 352 = 32 * 11

Could someone help out here? I first thought this might be due to
use_aio being non-zero, but ioreq_start() doesn't permit more than
max_requests struct ioreq instances to exist at a time.

Additionally, shouldn't the driver be smarter and gracefully handle
grant mapping failures (as the per-domain map track table in the
hypervisor is a finite resource)?

Jan

> + xen_be_printf(xendev, 0, "xc_gnttab_set_max_grants failed: %s\n",
> + strerror(errno));
> }
>
> static int blk_init(struct XenDevice *xendev)
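
On the second question, here is a hedged sketch of what tolerating a grant mapping failure could look like around the libxc call. It is an illustration only, not the xen_disk.c code: the helper map_or_defer() and its defer-and-retry policy are invented for the example, and it assumes libxenctrl's xc_gnttab_map_grant_refs(), which returns NULL on failure.

/* Sketch only: defer rather than abort when a batch of grants cannot be
 * mapped (for example because the gntdev driver or the hypervisor's
 * maptrack table is temporarily exhausted). */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <xenctrl.h>

/* Try to map 'count' grant references from 'domid'. On failure, log the
 * reason and return NULL so the caller can park the request and retry
 * once earlier mappings have been released, instead of failing the I/O. */
static void *map_or_defer(xc_gnttab *gnt, uint32_t domid,
                          uint32_t *refs, uint32_t count)
{
    uint32_t domids[count];
    void *addr;

    for (uint32_t i = 0; i < count; i++) {
        domids[i] = domid;
    }

    addr = xc_gnttab_map_grant_refs(gnt, count, domids, refs,
                                    PROT_READ | PROT_WRITE);
    if (addr == NULL) {
        fprintf(stderr, "deferring: can't map %u grant refs (%s)\n",
                count, strerror(errno));
    }
    return addr;
}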






stefano.stabellini at eu

May 11, 2012, 10:07 AM

Post #3 of 4
Re: [PATCH] qemu/xendisk: set maximum number of grants to be used

On Fri, 11 May 2012, Jan Beulich wrote:
> >>> On 11.05.12 at 09:19, "Jan Beulich" <JBeulich [at] suse> wrote:
> > Legacy (non-pvops) gntdev drivers may require this to be done when the
> > number of grants intended to be used simultaneously exceeds a certain
> > driver specific default limit.
> >
> > Signed-off-by: Jan Beulich <jbeulich [at] suse>
> >
> > --- a/hw/xen_disk.c
> > +++ b/hw/xen_disk.c
> > @@ -536,6 +536,10 @@ static void blk_alloc(struct XenDevice *
> > if (xen_mode != XEN_EMULATE) {
> > batch_maps = 1;
> > }
> > + if (xc_gnttab_set_max_grants(xendev->gnttabdev,
> > + max_requests * BLKIF_MAX_SEGMENTS_PER_REQUEST + 1) < 0)
>
> In more extensive testing it appears that very rarely this value is still
> too low:
>
> xen be: qdisk-768: can't map 11 grant refs (Cannot allocate memory, 342 maps)
>
> 342 + 11 = 353 > 352 = 32 * 11
>
> Could someone help out here? I first thought this might be due to
> use_aio being non-zero, but ioreq_start() doesn't permit more than
> max_requests struct ioreq instances to exist at a time.

Actually 342 + 11 = 353, which should still be OK because it is equal to
32 * 11 + 1, where the additional 1 is for the ring, right?


> Additionally, shouldn't the driver be smarter and gracefully handle
> grant mapping failures (as the per-domain map track table in the
> hypervisor is a finite resource)?

yes, probably
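
Spelling the exchange out with the assumed constants from above (32 requests of 11 segments each): the ceiling configured by the patch is 353, and the failing request, 342 maps already in place plus 11 new ones, lands exactly on that ceiling rather than beyond it, so the total count alone does not explain the error.

/* Sketch only: the numbers from the log line versus the configured ceiling. */
#include <assert.h>

#define MAX_REQUESTS                   32   /* assumed, as above */
#define BLKIF_MAX_SEGMENTS_PER_REQUEST 11   /* assumed, as above */

int main(void)
{
    unsigned int ceiling = MAX_REQUESTS * BLKIF_MAX_SEGMENTS_PER_REQUEST + 1;
    unsigned int in_use  = 342;   /* "342 maps" in the error message */
    unsigned int wanted  = 11;    /* "can't map 11 grant refs" */

    assert(in_use + wanted == ceiling);   /* 353 == 353: within the limit */
    return 0;
}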



JBeulich at suse

May 14, 2012, 12:41 AM

Post #4 of 4
Re: [PATCH] qemu/xendisk: set maximum number of grants to be used

>>> On 11.05.12 at 19:07, Stefano Stabellini <stefano.stabellini [at] eu> wrote:
> On Fri, 11 May 2012, Jan Beulich wrote:
>> >>> On 11.05.12 at 09:19, "Jan Beulich" <JBeulich [at] suse> wrote:
>> > Legacy (non-pvops) gntdev drivers may require this to be done when the
>> > number of grants intended to be used simultaneously exceeds a certain
>> > driver specific default limit.
>> >
>> > Signed-off-by: Jan Beulich <jbeulich [at] suse>
>> >
>> > --- a/hw/xen_disk.c
>> > +++ b/hw/xen_disk.c
>> > @@ -536,6 +536,10 @@ static void blk_alloc(struct XenDevice *
>> > if (xen_mode != XEN_EMULATE) {
>> > batch_maps = 1;
>> > }
>> > + if (xc_gnttab_set_max_grants(xendev->gnttabdev,
>> > + max_requests * BLKIF_MAX_SEGMENTS_PER_REQUEST + 1) < 0)
>>
>> In more extensive testing it appears that very rarely this value is still
>> too low:
>>
>> xen be: qdisk-768: can't map 11 grant refs (Cannot allocate memory, 342 maps)
>>
>> 342 + 11 = 353 > 352 = 32 * 11
>>
>> Could someone help out here? I first thought this might be due to
>> use_aio being non-zero, but ioreq_start() doesn't permit more than
>> max_requests struct ioreq instances to exist at a time.
>
> Actually 342 + 11 = 353, which should still be OK because it is equal to
> 32 * 11 + 1, where the additional 1 is for the ring, right?

The +1 is for the ring, yes. And the calculation in the driver actually
appears to be fine. It's rather an issue with fragmentation afaict:
the driver needs to allocate 11 contiguous slots, and a free run of that
size may not be available. I'll send out a v2 of the patch soon, taking
fragmentation into account.

Jan
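
Jan's fragmentation point can be illustrated with a toy slot map; this is a standalone sketch of the failure mode, not the legacy gntdev driver's actual allocator. Even with a few hundred slots free in total, a request for 11 contiguous slots can fail once the free space is broken into short runs.

/* Sketch only: contiguous allocation failing despite ample free space. */
#include <stdbool.h>
#include <stdio.h>

#define NR_SLOTS 353   /* the ceiling from the v1 patch: 32 * 11 + 1 */

static bool used[NR_SLOTS];

/* Return the start of the first run of 'count' contiguous free slots,
 * or -1 if no such run exists. */
static int find_contiguous(int count)
{
    int run = 0;

    for (int i = 0; i < NR_SLOTS; i++) {
        run = used[i] ? 0 : run + 1;
        if (run == count) {
            return i - count + 1;
        }
    }
    return -1;
}

int main(void)
{
    /* Occupy every 8th slot: 308 slots remain free, but the longest free
     * run is only 7 slots, so an 11-slot contiguous request cannot be met. */
    for (int i = 0; i < NR_SLOTS; i += 8) {
        used[i] = true;
    }

    printf("start of 11 contiguous free slots: %d\n", find_contiguous(11));
    /* prints -1 even though 308 individual slots are free */
    return 0;
}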


