rulerof at gmail
May 11, 2012, 3:54 PM
Re: gaming on multiple OS of the same machine?
I've done exactly this, and I can affirm that it kicks ass ;)
I'll answer your questions in line below.
On May 11, 2012, at 6:21 PM, Peter Vandendriessche
<peter.vandendriessche [at] gmail> wrote:
> I am new to Xen and I was wondering if the following construction would be feasible with the current Xen.
> I would like to put 2/3/4 new computers in my house, mainly for gaming. Instead of buying 2/3/4 different computers, I was thinking of building one computer with a 4/6/8-core CPU, 2/3/4 GPUs, 2/3/4 small SSDs, and attach 2/3/4 monitors to it, 2/3/4 keyboards and 2/3/4 mice, and run VGA passthrough. This would save me money on hardware, and it would also save quite some space on the desk where I wanted to put them.
> If this is possible, I have a few additional questions about this:
> 1) Would the speed on each virtual machine be effectively that of a 2-core CPU with 1 GPU? What about memory speed/latency?
Make sure that you actually have the cores to give to those DomUs.
Specifically, if you plan on making each guest a dual-core machine and
have four guests, get an 8-core chip. The biggest benefit of
virtualization is that it lets you do more with less, but in my
experience, running games will make your guest OS use pretty much
every bit of CPU it thinks is available. You might be able to get
away with running four dual-vCPU guests on a six-core host, but with
frame rate being paramount, I'd advise against pushing it. With a
core per vCPU, there seems, to me, to be "just enough" left over to
run the hypervisor/Dom0, along with whatever RAM you've left it to
work with, of course.
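If it helps, the vCPU count, pinning, and memory are all set per guest in the DomU config file. A rough sketch (the core numbers and memory size here are just examples, not a recommendation for your hardware):

```
# Per-guest CPU/memory settings in an xl/xm DomU config file.
# Example values only -- adjust to your own core layout.
vcpus  = 2        # guest sees two CPUs
cpus   = "2-3"    # pin them to physical cores 2 and 3
memory = 4096     # fixed allocation so Dom0 keeps the remainder
```

Pinning each guest to its own pair of cores keeps the guests from fighting each other for CPU time mid-game.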
> 2) Is it possible to split dual GPUs, e.g. drive 4 OSes with 2x Radeon HD 6990 (=4 GPUs in 2 PCI-e slots)?
Alas, no. Not because Xen or IOMMU won't allow it, but because of the
architecture of the 6990. While the individual GPUs /can/ be split up
from the standpoint of PCIe, all of the video outputs are hardwired to
the "primary" GPU. So while it would work in theory, there's nowhere
to plug in the second monitor. Crossfire might work, though; I
haven't tested it personally, and I didn't get any confirmation from
the mailing list when I asked some weeks ago. My own tests are
forthcoming, but it's one of those "when I get the time" kind of
things.
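For comparison, two single-GPU cards split cleanly: each shows up as its own PCIe device, so each guest's config just claims one. The BDF addresses below are invented for illustration; check your own with lspci:

```
# guest1.cfg -- first Radeon (example address only)
pci = [ '01:00.0' ]

# guest2.cfg -- second Radeon (example address only)
pci = [ '02:00.0' ]
```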
> 3) How should one configure the machine such that each OS receives only the input from its own keyboard/mouse?
The best method I've come up with is to dedicate a single USB
controller to each VM. This may be more difficult than it sounds,
depending on the architecture of your motherboard. Should that be a
limitation, I suggest picking up a Highpoint RocketU 1144A USB3
controller. It provides four USB controllers on one PCIe 4x card,
essentially giving you four different PCIe devices, one for each port,
that can be assigned to individual VMs. Failing that, there's always
USB passthrough with GPLPV, but that didn't work for me. YMMV.
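To make that concrete: since each of the 1144A's four controllers appears as a separate PCIe function, you'd give one to each guest in its config. The addresses are again made up for the example:

```
# guest1.cfg -- one of the card's four USB controllers.
# Example BDF only -- find yours with lspci.
pci = [ '03:00.0' ]

# guest2.cfg would get '04:00.0', and so on,
# with that guest's keyboard and mouse plugged into that port.
```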
> 4) Any other problems or concerns that you can think of?
Not at the moment, but they do exist. This project of mine evolved
over months and involved a lot of research, particularly in the area
of determining what hardware to purchase. If you're still in the
conceptual stages of your build, I may have some suggestions for you,
if you like.
Now that I think of it, you'll have the least amount of hassle doing
"secondary VGA passthrough," which just means assigning a video card
to a VM as you would any other PCIe device. I'll readily admit that
this is nowhere near as cool as primary passthrough, but it involves
the least amount of work.
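As a rough sketch, a secondary-passthrough guest is just an ordinary HVM DomU with the card added to its pci line. The names, disk, and addresses below are illustrative, not from my actual setup:

```
# win-gaming.cfg -- illustrative DomU config for secondary passthrough
name    = "win-gaming"
builder = "hvm"
memory  = 4096
vcpus   = 2
disk    = [ 'phy:/dev/sdb,hda,w' ]   # one of the small SSDs
pci     = [ '01:00.0', '01:00.1' ]   # GPU plus its HDMI audio function
# The guest boots on the emulated VGA adapter; the passed-through
# card only lights up once the in-guest driver initializes it.
```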
> Thanks in advance,
Best of luck to you!
Xen-users mailing list
Xen-users [at] lists