Mailing List Archive: Xen: Users

gaming on multiple OS of the same machine?

 

 



peter.vandendriessche at gmail

May 11, 2012, 3:17 PM

Post #1 of 33 (2327 views)
gaming on multiple OS of the same machine?

Hi,

I am new to Xen and I was wondering if the following construction would be
feasible with the current Xen.

I would like to put 2/3/4 new computers in my house, mainly for gaming.
Instead of buying 2/3/4 different computers, I was thinking of building one
computer with a 4/6/8-core CPU, 2/3/4 GPUs, and 2/3/4 small SSDs, attaching
2/3/4 monitors, keyboards, and mice to it, and running VGA passthrough.
This would save me money on hardware, and it would also save quite some
space on the desk where I wanted to put them.

If this is possible, I have a few additional questions about this:

1) Would the speed on each virtual machine be effectively that of a 2-core
CPU with 1 GPU? What about memory speed/latency?
2) Is it possible to split dual GPUs, e.g. drive 4 OSes with 2x Radeon HD
6990 (=4 GPUs in 2 PCI-e slots)?
3) How should one configure the machine such that each OS receives only the
input from its own keyboard/mouse?
4) Any other problems or concerns that you can think of?

Thanks in advance,
Peter


cdelorme at gmail

May 11, 2012, 3:51 PM

Post #2 of 33 (2285 views)
Re: gaming on multiple OS of the same machine? [In reply to]

Hello Peter,


Question #1: Performance

With x86 virtualization, hardware such as CPUs and memory is mapped rather
than layered, so there should be almost no difference in speed from running
natively.

I am running Windows 7 HVM with an ATI Radeon 6870. My system has 12GB of
RAM and a Core i7 2600. I gave Windows 4 vcores and 6GB of memory; the
Windows Experience Index gives me 7.5 for CPU and 7.6 for RAM. With VGA
passthrough I get 7.8 for both graphics scores. I run all my systems on LVM
partitions on an OCZ Vertex 3 drive; without PV drivers Windows scored 6.2
for HDD speed, and with PV drivers it jumped to 7.8.

Scores aside, performance with CPU/RAM is excellent, I am hoping to create
a demo video of my system when I get some time (busy with college).

My biggest concern right now is that Disk IO ranges from excellent to
abysmal, though I have a feeling the displayed values and the actual speeds
might differ. I'll be putting together an extensive test on this later, but
let's just say IO speeds vary (even with PV drivers). From my experience
the Disk IO does not appear to have any effect on games, so it may only be
write speeds. I have not run any disk benchmarks.


Question #2: GPU Assignment

I have no idea how Dual GPU cards work, so I can't really answer this
question.

I can advise you to be on the lookout for motherboards with NF200 chipsets
or strange PCIe switches. I bought an ASRock Extreme7 Gen3; it's a great
board, but the NF200 is completely incompatible with VT-d, so I ended up
with only one PCIe slot to pass. I can recommend the ASRock Extreme4 Gen3,
which I have right now; if I had enough money for a bigger PSU and a second
GPU I would be doing what you are planning.


Question #3: Configuration

There are two approaches to device connection: USB passthrough and PCI
passthrough. I haven't tried USB passthrough, but I have a feeling it
wouldn't work with complex devices that require OS drivers, such as
Bluetooth receivers or an Xbox 360 wireless adapter.

I took the second approach of passing the USB controller, but this will
vary by hardware. The ASRock Extreme4 Gen3 has four USB PCI controllers. I
have no idea how you would check this from the manual; I found out when I
ran "lspci" from the Linux Dom0.
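For anyone repeating that discovery step, a rough sketch of what it looks like from Dom0 (the PCI addresses below are placeholders for a hypothetical board, not Casey's actual output, and the xl commands assume a reasonably recent toolstack):

```shell
# From the Linux Dom0, list PCI devices and filter for USB controllers.
# The "00:1a.0"-style bus:device.function addresses are what get passed.
lspci -nn | grep -i usb

# With the xl toolstack, a controller can then be marked as assignable
# to guests (address is an illustrative example):
xl pci-assignable-add 00:1a.0
```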

I had no luck with USB 3.0; many devices weren't functional when connected
to it, so I left my four USB 3.0 ports to my Dom0 and passed all my USB
2.0 ports.

Again, this is hardware specific: one of the buses had four ports and the
other had only two, so I bought a 4-port USB PCI plate and attached it to
the board's additional USB headers to turn the 2-port controller into a
6-port one.

I use a ton of USB devices on my Windows system; Disk IO blows, but
everything else functions great. With the PCI-passed USB I am able to use
an Xbox 360 wireless adapter, two wireless USB keyboards in different areas
of the room, a Hauppauge HD PVR, a Logitech C910 HD webcam, and a Logitech
wireless mouse. I had Bluetooth but got rid of it; the device itself went
bad and was causing my system to bluescreen.

When I tested USB 3.0, I got no video from my Hauppauge HD PVR or my
Logitech C910 webcam, and various devices failed to function right when
connected.


Question #4: Other?

I am 100% certain you could get a system running two Windows 7 HVMs up for
gaming, but you may need to daisy-chain some USB devices if you want more
than just a keyboard and mouse for each.

Also, if you are not confident in your ability to work with *nix, I
wouldn't advise it. I had spent two years tinkering with web servers in
Debian, so I thought I would have an easy time of it.

I tried it on a week off; it ended up taking me two months to complete my
setup. The results are spectacular, but be prepared to spend many hours
debugging unless you find a really good guide.

I would recommend going for two Windows guests on one rig, then duplicating
that rig for a second machine, for two reasons: if you are successful with
the first machine, you can easily copy the process, and it will save you
hours of attempting to get a whole four gaming machines working on one
system.


As stated, I only run one gaming machine, but I do have two other HVMs
running: one manages my household's network and the other is a private
web/file server. So performance-wise, Xen can do a lot.

Best of luck,

~Casey



rulerof at gmail

May 11, 2012, 3:54 PM

Post #3 of 33 (2311 views)
Re: gaming on multiple OS of the same machine? [In reply to]

Hello Peter,

I've done exactly this, and I can affirm that it kicks ass ;)

I'll answer your questions in line below.

On May 11, 2012, at 6:21 PM, Peter Vandendriessche
<peter.vandendriessche [at] gmail> wrote:

>
> 1) Would the speed on each virtual machine be effectively that of a 2-core CPU with 1 GPU? What about memory speed/latency?

Make sure that you actually have the cores to give to those DomUs.
Specifically, if you plan on making each guest a dual-core machine
and have 4 guests, get an 8-core chip. The biggest benefit of
virtualization is that it lets you do more with less, but in my
experience, running games will make your guest OS use pretty much
every bit of CPU that it thinks is available. You might be able to
get away with running four dual-CPU guests on a six-CPU host, but with
frame rate being paramount, I advise against pushing it. With all of
your cores being utilized by guests, there seems, to me, to be "just
enough" left to run the hypervisor/Dom0, combined with whatever RAM it
has left to work with, of course.
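As a sketch of how that allocation might look in practice (the numbers are illustrative, not a tested configuration), each guest's config file caps its vCPUs and memory, and Xen's boot line can reserve resources for Dom0:

```shell
# In each HVM guest config (e.g. /etc/xen/gamer1.cfg) -- illustrative values:
vcpus  = 2       # dual-core guest
memory = 4096    # MB of RAM for this guest

# On the Xen boot line (in GRUB), reserve cores and memory for Dom0 so the
# guests cannot starve the control domain:
#   dom0_max_vcpus=2 dom0_vcpus_pin dom0_mem=2048M
```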

> 2) Is it possible to split dual GPUs, e.g. drive 4 OSes with 2x Radeon HD 6990 (=4 GPUs in 2 PCI-e slots)?

Alas, no. Not because Xen or the IOMMU won't allow it, but because of the
architecture of the 6990. While the individual GPUs /can/ be split up
from the standpoint of PCIe, all of the video outputs are hardwired to
the "primary" GPU. So while it would work in theory, there's nowhere
to plug in the second monitor. CrossFire might work, though; I haven't
tested it personally, and I didn't get any confirmation from the
mailing list when I asked some weeks ago. My own tests are
forthcoming, but it's one of those "when I get the time" kinds of
things. :P

> 3) How should one configure the machine such that each OS receives only the input from its own keyboard/mouse?

The best method I've come up with is to dedicate a single USB
controller to each VM. This may be more difficult than it sounds,
depending on the architecture of your motherboard. Should that be a
limitation, I suggest picking up a HighPoint RocketU 1144A USB 3.0
controller. It provides four USB controllers on one PCIe x4 card,
essentially giving you four different PCIe devices, one for each port,
that can be assigned to individual VMs. Failing that, there's always
USB passthrough with GPLPV, but that didn't work for me. YMMV.
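In config-file terms, dedicating a controller per VM is ordinary PCI passthrough; a minimal sketch (the BDF addresses are placeholders you would get from lspci, not real ones):

```shell
# gamer1.cfg -- give this guest one of the motherboard's USB controllers,
# so only its own keyboard/mouse reach this OS:
pci = [ '00:1a.0' ]

# gamer2.cfg -- a different controller for the second guest's input devices:
pci = [ '00:1d.0' ]
```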


> 4) Any other problems or concerns that you can think of?

Not at the moment, but they do exist. This project of mine evolved
over months and involved a lot of research, particularly in the area
of determining what hardware to purchase. If you're still in the
conceptual stages of your build, I may have some suggestions for you
if you like.

Now that I think of it, you'll have the least amount of hassle by
doing "secondary VGA passthrough," which is just assigning a video
card to a VM as you would any other PCIe device. I'll readily admit
that this is nowhere near as cool as primary passthrough, but it
involves the least amount of work.
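For what it's worth, secondary passthrough in a guest config is just the GPU listed as another PCI device (addresses hypothetical; the emulated VGA remains the boot display):

```shell
# Secondary VGA passthrough: assign the GPU and its audio function like
# any other PCIe devices; the guest boots on emulated VGA.
pci = [ '01:00.0', '01:00.1' ]   # GPU + its HDMI audio function
gfx_passthru = 0                  # 1 would mean primary passthrough
```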


Best of luck to you!

Cheers,
Andrew Bobulsky

_______________________________________________
Xen-users mailing list
Xen-users [at] lists
http://lists.xen.org/xen-users


rulerof at gmail

May 12, 2012, 10:10 AM

Post #4 of 33 (2288 views)
Re: gaming on multiple OS of the same machine? [In reply to]

Hello Casey,

Quick question!

What does the config file entry for the LVM-type setup you have going on
for the guest disk look like? Might you be able to point me to a
guide that'll show me how to set up a disk like that?

Thanks!

-Andrew Bobulsky




cdelorme at gmail

May 12, 2012, 10:48 AM

Post #5 of 33 (2285 views)
Re: gaming on multiple OS of the same machine? [In reply to]

Hi Andrew,

You mean the Windows DomU configuration, right? I put it up on pastebin
here along with a couple other configuration files:
http://pastebin.com/9E1g1BHf

I'm just using normal LVs and passing them to an HVM; there is no
special trick, so any LVM guide should put you on the right track.

I named my SSD VG "xen" so my drives are all found at /dev/xen/lvname.

**********

The only convoluted part is my Dom0 installation, since I used EFI boot and
an LV to store root (/). I have two 256MB partitions, one FAT32 for EFI and
one ext4 for /boot, and the rest of the disk goes to LVM. I did the LVM
setup right in the installation: I added the SSD partition (PV) to a volume
group (VG), then threw on a few partitions.

I created a Linux root partition of 8GB, a home partition of 20GB, and a
swap partition of 2GB. I mapped those in the configuration, then went ahead
and made a 140GB partition for Windows and two 4GB partitions for pfSense
and Nginx.

Once the system is installed, the standard LVM tools can be used: lvcreate,
lvresize, lvremove, the lv/vg/pvdisplay commands, etc.
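As a sketch of that workflow with the standard tools (device path, names, and sizes are illustrative, loosely following the layout described above):

```shell
# One-time setup: turn an SSD partition into a PV and build the "xen" VG.
pvcreate /dev/sda3
vgcreate xen /dev/sda3

# Carve out guest disks; each then appears at /dev/xen/<lvname>.
lvcreate -L 140G -n windows xen
lvcreate -L 4G   -n pfsense xen

# Later, grow a volume (the guest must still resize its own partitions):
lvresize -L +20G /dev/xen/windows
```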

My Disk IO is not optimal, which might be because I run four systems off
the same drive at the same time, so if you intend to run many systems you
may want to split them onto multiple physical disks. However, I have reason
to believe my IO problems are a Xen bug; I just haven't had time to
test/prove it.

**********

When you pass an LV to an HVM it treats it like a physical disk, and it
will create a partition table, MBR code, and partitions inside the LV
(partitions within partitions).

When I get some free time I want to write up a pretty verbose guide on LVM
specifically for Xen, there are plenty of things I've learned about
accessing the partitions too.

Some things I learned recently with Xen: emulated IDE drives (hdX) only
allow four passed devices, so if you have more than three storage
partitions you will want to use SCSI (sdX) for them, but SCSI drives are
not bootable. Hence my configuration has "hda" for the boot drive (an LV)
and sdX for all storage drives (LVs), where X increments alphabetically:
a, b, c, d, etc.
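That hda/sdX split looks roughly like this in a guest config (the LV names are made up for illustration; treat this as a sketch, not Casey's exact file):

```shell
disk = [
    'phy:/dev/xen/windows,hda,w',    # bootable: must be an IDE (hdX) device
    'phy:/dev/xen/storage1,sda,w',   # extra storage as SCSI (sdX), not bootable
    'phy:/dev/xen/storage2,sdb,w',
]
```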

**********

Hope that helps a bit. Let me know if you have any other questions or if
that didn't answer them correctly.

~Casey




rulerof at gmail

May 12, 2012, 11:03 AM

Post #6 of 33 (2297 views)
Re: gaming on multiple OS of the same machine? [In reply to]

That's excellent!

Thanks for that info; it is *very* helpful.

I'm currently having a problem where, after installing the GPLPV
drivers (from here:
http://wiki.univention.de/index.php?title=Installing-signed-GPLPV-drivers
), my system BSODs during winload on atikmpag.sys.

You're running GPLPV... are you running all of the drivers, or just select ones?


On Sat, May 12, 2012 at 1:48 PM, Casey DeLorme <cdelorme [at] gmail> wrote:
> Hi Andrew,
>
> You mean the Windows DomU configuration, right?  I put it up on pastebin
> here along with a couple other configuration files:
> http://pastebin.com/9E1g1BHf
>
> I'm just using normal LV partitions and passing them to an HVM, there is no
> special trick so any LVM guide should put you on the right track.
>
> I named my SSD VG "xen" so my drives are all found at /dev/xen/lvname.
>
> **********
>
> The only convoluted part is my Dom0 installation, since I used EFI boot and
> an LV to store root (/), so I have two 256MB partitions, one FAT32 for EFI,
> one Ext4 for boot (/boot) and then the rest of the disk to LVM.  I did the
> LVM setup right in the installation, added the SSD partition (PV) to a
> Volume Group (VG) then threw on a few partitions.
>
> I created a linux root partition of 8GB, a home partition of 20GB, and a
> swap partition of 2GB.  I mapped those in the configuration, then I went on
> ahead and made a 140GB partition for windows, and two 4GB partitions for
> PFSense and NGinx.
>
> Once the system is installed, the standard LVM tools can be used, lvcreate,
> lvresize, lvremove, lv/vg/pvdisplay commands, etc...
>
> My Disk IO is not optimal, which might be because I run four systems off the
> same drive at the same time, so if you intend to use many systems you may
> want to split the drives onto multiple physical disks.  However, I have
> reason to believe my IO problems are a Xen bug, I just haven't had time to
> test/prove it.
>
> **********
>
> When you pass a LV to an HVM it treats it like a physical disk, and it will
> create a partition table, MBR code, and partitions inside the LV (partitions
> within partitions).
>
> When I get some free time I want to write up a pretty verbose guide on LVM
> specifically for Xen, there are plenty of things I've learned about
> accessing the partitions too.
>
> Some things I learned recently with Xen, IDE drives (hdX) only allow four
> passed devices, so if you have more than 3 storage partitions you will want
> to use SCSI (sdX) for them, but SCSI drives are not bootable.  Hence my
> configuration has "hda" for the boot drive (lv partition), and sdX for all
> storage drives (lv partitons) (X = alphabetical increment, a, b, c, d, etc).
>
> **********
>
> Hope that helps a bit, let me know if you have any other questions or if
> that didn't answer them correct.
>
> ~Casey
>
>
> On Sat, May 12, 2012 at 1:10 PM, Andrew Bobulsky <rulerof [at] gmail> wrote:
>>
>> Hello Casey,
>>
>> Quick question!
>>
>> What's the config file entry for the LVM-type setup you have going on
>> for the guest disk look like?  Might you be able to point me to a
>> guide that'll show me how to set up a disk like that?
>>
>> Thanks!
>>
>> -Andrew Bobulsky
>>
>> On Fri, May 11, 2012 at 6:51 PM, Casey DeLorme <cdelorme [at] gmail> wrote:
>> > Hello Peter,
>> >
>> >
>> > Question #1: Performance
>> >
>> > With x86 Virtualization hardware such as CPU's and Memory are mapped not
>> > layered, there should be almost no difference in speeds from running
>> > natively.
>> >
>> > I am running Windows 7 HVM with an ATI Radeon 6870.  My system has 12GB
>> > of
>> > RAM, and a Core i7 2600.  I gave Windows 4 vcores and 6GB of memory,
>> > Windows
>> > Experience index gives me 7.5 for CPU and 7.6 for RAM.  With VGA
>> > Passthrough
>> > I have 7.8 for both graphics scores.  I am running all my systems on LVM
>> > partitions on an OCZ Vertex 3 Drive, without PV Drivers windows scored
>> > 6.2
>> > for HDD speeds, with PV drivers it jumped to 7.8.
>> >
>> > Scores aside, performance with CPU/RAM is excellent, I am hoping to
>> > create a
>> > demo video of my system when I get some time (busy with college).
>> >
>> > My biggest concern right now is Disk IO ranges from excellent to
>> > abysmal,
>> > but I have a feeling the displayed values and actual speeds might be
>> > different.  I'll put putting together an extensive test with this later,
>> > but
>> > let's just say IO speeds vary (even with PV drivers).  The Disk IO does
>> > not
>> > appear to have any affect on games from my experience, so it may only be
>> > write speeds.  I have not run any disk benchmarks.
>> >
>> >
>> > Question #2: GPU Assignment
>> >
>> > I have no idea how Dual GPU cards work, so I can't really answer this
>> > question.
>> >
>> > I can advise you to be on the lookout for motherboards with NF200
>> > chipsets
>> > or strange PCI switches. I bought an ASRock Extreme7 Gen3, a great board,
>> > but
>> > NF200 is completely incompatible with VT-d, ended up with only one PCIe
>> > slot
>> > to pass.  I can recommend the ASRock Extreme4 Gen3, got it right now, if
>> > I
>> > had enough money to buy a bigger PSU and a second GPU I would be doing
>> > what
>> > you are planning to.
>> >
>> >
>> > Question #3:  Configuration
>> >
>> > Two approaches to device connection, USB Passthrough and PCI
>> > Passthrough.  I
>> > haven't tried USB Passthrough, but I have a feeling it wouldn't work
>> > with
>> > complex devices that require OS drivers, such as BlueTooth receivers or
>> > an
>> > XBox 360 Wireless adapter.
>> >
>> > I took the second approach of passing the USB Controller, but this will
>> > vary
>> > by hardware.  The ASRock Extreme4 Gen3 has four USB PCI Controllers, I
>> > don't
>> > have any idea how you would check this stuff from their manuals, I found
>> > out
>> > when I ran "lspci" from Linux Dom0.
>> >
>> > I had no luck with USB 3.0, many devices weren't functional when
>> > connected
>> > to it, so I left my four USB 3.0 ports to my Dom0, and passed all my USB
>> > 2.0
>> > ports.
>> >
>> > Again, this is hardware specific: one of the buses had 4 ports, the other had only
>> > two,
>> > I bought a 4 port USB PCI plate and attached the additional USB pins
>> > from
>> > the board to turn the 2-port into a 6-port controller.
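For anyone attempting the same controller passthrough, the general shape with the xl toolstack looks something like this (the PCI address below is made up — check your own `lspci` output; older setups hide devices with pciback instead):

```shell
# List USB controllers and note their PCI addresses (address below is an example)
lspci | grep -i usb

# Mark the controller as assignable in Dom0 so it can be handed to a guest
xl pci-assignable-add 00:1a.0

# Then reference the same address in the guest config:
#   pci = [ '00:1a.0' ]
```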
>> >
>> > I use a ton of USB devices on my Windows system, Disk IO blows, but
>> > everything else functions great.  With PCI Passed USB I am able to use
>> > an
>> > XBox 360 Wireless Adapter, 2 Wireless USB Keyboards in different areas
>> > of
>> > the room, a Hauppauge HD PVR, A logitech C910 HD Webcam, and a Logitech
>> > Wireless Mouse.  I had BlueTooth but I got rid of it, the device itself
>> > went
>> > bad and was causing my system to BlueScreen.
>> >
>> > When I tested USB 3.0, I got no video from my Hauppauge HD PVR or my
>> > Logitech
>> > C910 webcam, and various devices when connected failed to function
>> > right.
>> >
>> >
>> > Question #4:  Other?
>> >
>> > I am 100% certain you could get a system running 2 Windows 7 HVM's up
>> > for
>> > gaming, but you may need to daisy chain some USB devices if you want
>> > more
>> > than just a keyboard and mouse for each.
>> >
>> > Also, if you are not confident in your ability to work with *nix, I
>> > wouldn't
>> > advise it.  I had spent two years tinkering with Web Servers in Debian,
>> > so I
>> > thought I would have an easy time of things.
>> >
>> > I tried it on a week off; it ended up taking me 2 months to complete my
>> > setup.
>> >  The results are spectacular, but be prepared to spend many hours
>> > debugging
>> > unless you find a really good guide.
>> >
>> > I would recommend going for a Two Windows on One Rig, and duplicate that
>> > rig
>> > for a second machine, and I recommend that for two reasons.  If you are
>> > successful with the first machine, you can easily copy the process.
>> >  This
>> > will save you hours of attempting to get a whole four Gaming machines
>> > working on one system.
>> >
>> >
>> > As stated, I only run one gaming machine, but I do have two other HVM's
>> > running, one manages my household's network and the other is a private
>> > web/file server.  So, performance wise Xen can do a lot.
>> >
>> > Best of luck,
>> >
>> > ~Casey
>> >
>> > On Fri, May 11, 2012 at 6:17 PM, Peter Vandendriessche
>> > <peter.vandendriessche [at] gmail> wrote:
>> >>
>> >> Hi,
>> >>
>> >> I am new to Xen and I was wondering if the following construction would
>> >> be
>> >> feasible with the current Xen.
>> >>
>> >> I would like to put 2/3/4 new computers in my house, mainly for gaming.
>> >> Instead of buying 2/3/4 different computers, I was thinking of building
>> >> one
>> >> computer with a 4/6/8-core CPU, 2/3/4 GPUs, 2/3/4 small SSDs, and
>> >> attach
>> >> 2/3/4 monitors to it, 2/3/4 keyboards and 2/3/4 mouses, and run VGA
>> >> passthrough. This would save me money on hardware, and it would also
>> >> save
>> >> quite some space on the desk where I wanted to put them.
>> >>
>> >> If this is possible, I have a few additional questions about this:
>> >>
>> >> 1) Would the speed on each virtual machine be effectively that of a
>> >> 2-core
>> >> CPU with 1 GPU? What about memory speed/latency?
>> >> 2) Is it possible to split dual GPUs, e.g. drive 4 OSes with 2x Radeon
>> >> HD
>> >> 6990 (=4 GPUs in 2 PCI-e slots)?
>> >> 3) How should one configure the machine such that each OS receives only
>> >> the input from its own keyboard/mouse?
>> >> 4) Any other problems or concerns that you can think of?
>> >>
>> >> Thanks in advance,
>> >> Peter
>> >>
>> >>

_______________________________________________
Xen-users mailing list
Xen-users [at] lists
http://lists.xen.org/xen-users


tknchris at gmail

May 12, 2012, 11:10 AM

Post #7 of 33 (2288 views)
Permalink
Re: gaming on multiple OS of the same machine? [In reply to]

kpartx being one of them! awesome tool for lvm backed domU's
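For the curious, kpartx maps the partitions that live inside an LVM-backed domU disk so you can reach them from Dom0 (the LV path below is hypothetical, and the domU should be shut down first):

```shell
# Create device-mapper entries for the partitions inside the guest's LV
kpartx -av /dev/xen/windows

# The guest's partitions now appear under /dev/mapper, e.g. xen-windows1, xen-windows2
mount /dev/mapper/xen-windows2 /mnt   # mount the guest's second partition
# ... inspect or copy files ...
umount /mnt

# Remove the mappings when finished
kpartx -dv /dev/xen/windows
```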

On Sat, May 12, 2012 at 1:48 PM, Casey DeLorme <cdelorme [at] gmail> wrote:

> Hi Andrew,
>
> You mean the Windows DomU configuration, right? I put it up on pastebin
> here along with a couple other configuration files:
> http://pastebin.com/9E1g1BHf
>
> I'm just using normal LV partitions and passing them to an HVM, there is
> no special trick so any LVM guide should put you on the right track.
>
> I named my SSD VG "xen" so my drives are all found at /dev/xen/lvname.
>
> **********
>
> The only convoluted part is my Dom0 installation, since I used EFI boot
> and an LV to store root (/), so I have two 256MB partitions, one FAT32 for
> EFI, one Ext4 for boot (/boot) and then the rest of the disk to LVM. I did
> the LVM setup right in the installation, added the SSD partition (PV) to a
> Volume Group (VG) then threw on a few partitions.
>
> I created a linux root partition of 8GB, a home partition of 20GB, and a
> swap partition of 2GB. I mapped those in the configuration, then I went on
> ahead and made a 140GB partition for windows, and two 4GB partitions for
> PFSense and NGinx.
>
> Once the system is installed, the standard LVM tools can be used,
> lvcreate, lvresize, lvremove, lv/vg/pvdisplay commands, etc...
>
> My Disk IO is not optimal, which might be because I run four systems off
> the same drive at the same time, so if you intend to use many systems you
> may want to split the drives onto multiple physical disks. However, I have
> reason to believe my IO problems are a Xen bug, I just haven't had time to
> test/prove it.
>
> **********
>
> When you pass an LV to an HVM it treats it like a physical disk, and it
> will create a partition table, MBR code, and partitions inside the LV
> (partitions within partitions).
>
> When I get some free time I want to write up a pretty verbose guide on LVM
> specifically for Xen, there are plenty of things I've learned about
> accessing the partitions too.
>
> Some things I learned recently with Xen, IDE drives (hdX) only allow four
> passed devices, so if you have more than 3 storage partitions you will want
> to use SCSI (sdX) for them, but SCSI drives are not bootable. Hence my
> configuration has "hda" for the boot drive (lv partition), and sdX for all
> storage drives (LV partitions) (X = alphabetical increment: a, b, c, d, etc.).
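As an illustration of that hda/sdX split, the disk lines in an xm/xl HVM config might look like this (LV names are examples under the "xen" volume group described above; xm config files use Python syntax):

```python
# Hypothetical LV names. The bootable system LV is passed as IDE (hda);
# extra storage LVs go on SCSI (sdX), which allows more than four devices
# but is not bootable.
disk = [
    'phy:/dev/xen/windows,hda,w',
    'phy:/dev/xen/storage1,sda,w',
    'phy:/dev/xen/storage2,sdb,w',
]
```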
>
> **********
>
> Hope that helps a bit, let me know if you have any other questions or if
> that didn't answer them correctly.
>
> ~Casey


cdelorme at gmail

May 12, 2012, 11:19 AM

Post #8 of 33 (2295 views)
Permalink
Re: gaming on multiple OS of the same machine? [In reply to]

Andrew,

I hate that error with a passion, but the good news is I may have figured
out exactly what causes it thanks to hundreds of encounters and some
insight from Tobias Geiger's posts on VGA Performance Degradation.

First, it isn't the GPLPV drivers, it's your ATI card/drivers.

You may have noticed that on the first boot of your system your ATI card
performs optimally in Windows; when you reboot Windows but not the whole
Xen system, the GPU does not get reset.

It has been speculated that this is an FLR bug or perhaps more specifically
a Windows FLR bug.

The solution: at boot time, go to the USB Safe Ejection option and eject
the card. Your screen goes black for 1-3 seconds and the card
automatically reinstalls. This is essentially a forced FLR, and it will
fix the performance issues... at least until you reboot Windows again.



My Solution(s) to Atikmpag.sys errors:

I encountered this bug in two very specific instances.

A) When I was using a buggy device: in my case my BlueTooth adapter was
dying and I didn't realize it until after over a week of failed testing.
The buggy BlueTooth device was causing ATI's drivers to freak out; how
they are related is beyond me. In conclusion, try unplugging any extra
devices when testing.

B) When you install your ATI drivers, you need to do so on first boot so
the card is fresh. If you reboot Windows and not the whole machine before
trying to install the ATI drivers, the card hasn't been "reset" and either
the installation will BSOD or if you are successful the drivers are almost
certainly bugged and you will have problems in the future. My solution,
reboot Xen before installing ATI drivers. OR! Use the USB Safe Device
removal and then install them.


To fix your BSOD you may have to safe mode reboot, uninstall the ATI
drivers, reboot the entire computer (Xen), and then try again.


Also, if you install the Windows Update ATI drivers, you're essentially
screwed, since Windows will automatically reinstall them on every boot,
before you can eject the device to force an FLR. The only workaround I
have found for this is to reinstall Windows. If anyone knows how to tell
Windows to "really" delete an installed driver that would be fabulous, but
just the checkbox on device uninstall doesn't do it when you install the
Windows Update driver.

Hope that helps with a few things, let me know if I wasn't clear (It's a
confusing topic to begin with).

~Casey



rulerof at gmail

May 12, 2012, 11:28 AM

Post #9 of 33 (2305 views)
Permalink
Re: gaming on multiple OS of the same machine? [In reply to]

Casey,

Wow:

> B)  When you install your ATI drivers, you need to do so on first boot so
> the card is fresh.  If you reboot Windows and not the whole machine before
> trying to install the ATI drivers, the card hasn't been "reset" and either
> the installation will BSOD or if you are successful the drivers are almost
> certainly bugged and you will have problems in the future.  My solution,
> reboot Xen before installing ATI drivers.  OR!  Use the USB Safe Device
> removal and then install them.
>
> To fix your BSOD you may have to safe mode reboot, uninstall the ATI
> drivers, reboot the entire computer (Xen), and then try again.

My first instinct on reading that was to literally facepalm myself.
Thank heavens I wear glasses. :D

So basically (or perhaps, "in essence") the drivers need to be
installed when the ID of the DomU is 1. Fresh boot of Xen, first
post-Xen boot of the DomU with the device attached. Gonna try that
now :)

I DO recall the FLR thing you mentioned. Haven't run into that yet
because I haven't run into a successful install of the drivers :D

Thank you so much. I most certainly would have screwed it up again I
think! I'm on round 3 of Windows installation. GPLPV is installed,
so let's see how this goes...

Cheers,
Andrew Bobulsky

On Sat, May 12, 2012 at 2:19 PM, Casey DeLorme <cdelorme [at] gmail> wrote:
> Andrew,
>
> I hate that error with a passion, but the good news is I may have figured
> out exactly what causes it thanks to hundreds of encounters and some insight
> from Tobias Geiger's posts on VGA Performance Degradation.
>
> First, it isn't the GPLPV drivers, it's your ATI card/drivers.
>
> You may have noticed that the first boot of your system your ATI card
> performs optimally in Windows, well when you reboot windows and not the
> whole Xen system, the GPU does not get reset.
>
> It has been speculated that this is an FLR bug or perhaps more specifically
> a Windows FLR bug.
>
> The solution, at boot time go to the USB Safe Ejection option, and eject the
> card.  Your screen goes black for 1-3 seconds and it automatically
> reinstalls.  This is essentially a forced FLR, and will fix the performance
> issues... at least until you reboot windows again.
>
>
>
> My Solution(s) to Atikmpag.sys errors:
>
> I encountered this bug in two very specific instances.
>
> A)  If I was using a buggy device, in my case my BlueTooth adapter was dying
> and I didn't realize it until over a week of failed testing.  The buggy
> BlueTooth device was causing ATI's drivers to freak, how they are related is
> beyond me.  In conclusion, try unplugging any extra devices when testing.
>
> B)  When you install your ATI drivers, you need to do so on first boot so
> the card is fresh.  If you reboot Windows and not the whole machine before
> trying to install the ATI drivers, the card hasn't been "reset" and either
> the installation will BSOD or if you are successful the drivers are almost
> certainly bugged and you will have problems in the future.  My solution,
> reboot Xen before installing ATI drivers.  OR!  Use the USB Safe Device
> removal and then install them.
>
>
> To fix your BSOD you may have to safe mode reboot, uninstall the ATI
> drivers, reboot the entire computer (Xen), and then try again.
>
>
> Also, if you install the Windows Update ATI drivers, you're essentially
> screwed since it will automatically reinstall them every boot, which means
> before you can eject the device to force FLR.  The only workaround I have
> found for this is to reinstall Windows.  If anyone knows how to tell Windows
> to "really" delete an installed driver that would be fabulous, but just the
> checkbox on device uninstall doesn't do it when you install the Windows
> Update driver.
>
> Hope that helps with a few things, let me know if I wasn't clear (It's a
> confusing topic to begin with).
>
> ~Casey
>
> On Sat, May 12, 2012 at 2:10 PM, chris <tknchris [at] gmail> wrote:
>>
>> kpartx being one of them! awesome tool for lvm backed domU's
>>
>>
>> On Sat, May 12, 2012 at 1:48 PM, Casey DeLorme <cdelorme [at] gmail> wrote:
>>>
>>> Hi Andrew,
>>>
>>> You mean the Windows DomU configuration, right?  I put it up on pastebin
>>> here along with a couple other configuration files:
>>> http://pastebin.com/9E1g1BHf
>>>
>>> I'm just using normal LV partitions and passing them to an HVM, there is
>>> no special trick so any LVM guide should put you on the right track.
>>>
>>> I named my SSD VG "xen" so my drives are all found at /dev/xen/lvname.
>>>
>>> **********
>>>
>>> The only convoluted part is my Dom0 installation, since I used EFI boot
>>> and an LV to store root (/), so I have two 256MB partitions, one FAT32 for
>>> EFI, one Ext4 for boot (/boot) and then the rest of the disk to LVM.  I did
>>> the LVM setup right in the installation, added the SSD partition (PV) to a
>>> Volume Group (VG) then threw on a few partitions.
>>>
>>> I created a linux root partition of 8GB, a home partition of 20GB, and a
>>> swap partition of 2GB.  I mapped those in the configuration, then I went on
>>> ahead and made a 140GB partition for windows, and two 4GB partitions for
>>> PFSense and NGinx.
>>>
>>> Once the system is installed, the standard LVM tools can be used,
>>> lvcreate, lvresize, lvremove, lv/vg/pvdisplay commands, etc...
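
Spelled out with those standard tools, the layout described above comes out roughly as follows. This is a sketch only: the SSD partition path /dev/sda3 is hypothetical, the VG name "xen" and the sizes are the ones from this thread, and the commands need root on a real disk:

```shell
# Carve the described layout out of one SSD partition (hypothetical
# /dev/sda3); the VG is named "xen" so LVs appear under /dev/xen/<name>.
pvcreate /dev/sda3
vgcreate xen /dev/sda3
lvcreate -L 8G   -n root    xen   # Linux root (/)
lvcreate -L 20G  -n home    xen   # /home
lvcreate -L 2G   -n swap    xen   # swap
lvcreate -L 140G -n windows xen   # Windows HVM disk
lvcreate -L 4G   -n pfsense xen   # PFSense HVM
lvcreate -L 4G   -n nginx   xen   # NGinx HVM
lvdisplay xen                     # verify the result
```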
>>>
>>> My Disk IO is not optimal, which might be because I run four systems off
>>> the same drive at the same time, so if you intend to use many systems you
>>> may want to split the drives onto multiple physical disks.  However, I have
>>> reason to believe my IO problems are a Xen bug, I just haven't had time to
>>> test/prove it.
>>>
>>> **********
>>>
>>> When you pass an LV to an HVM it treats it like a physical disk, and it
>>> will create a partition table, MBR code, and partitions inside the LV
>>> (partitions within partitions).
>>>
>>> When I get some free time I want to write up a pretty verbose guide on
>>> LVM specifically for Xen, there are plenty of things I've learned about
>>> accessing the partitions too.
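
Those nested partitions can be reached from Dom0 with kpartx, the tool chris mentions above. A sketch, with a hypothetical LV name (mapper names can vary by system):

```shell
# Map the partition table inside an LV-backed domU disk so Dom0 can
# mount the nested partitions, then tear the mappings down again.
kpartx -av /dev/xen/windows           # adds e.g. /dev/mapper/xen-windows1, ...
mount /dev/mapper/xen-windows2 /mnt   # e.g. the main Windows partition
# ... inspect or back up files here ...
umount /mnt
kpartx -dv /dev/xen/windows           # remove the mappings
```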
>>>
>>> Some things I learned recently with Xen: IDE drives (hdX) only allow four
>>> passed devices, so if you have more than 3 storage partitions you will want
>>> to use SCSI (sdX) for them, but SCSI drives are not bootable.  Hence my
>>> configuration has "hda" for the boot drive (LV partition), and sdX for all
>>> storage drives (LV partitions) (X = alphabetical increment: a, b, c, d, etc.).
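
In the domU config file, that hda/sdX scheme comes out as a disk list along these lines. A sketch only; the LV names are illustrative, and the real file is in the pastebin linked above:

```
# One bootable IDE disk (hdX, max four devices), extra storage on SCSI
# (sdX, not bootable), each backed by an LV passed as a physical disk.
disk = [
    'phy:/dev/xen/windows,hda,w',
    'phy:/dev/xen/storage1,sda,w',
    'phy:/dev/xen/storage2,sdb,w',
]
```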
>>>
>>> **********
>>>
>>> Hope that helps a bit; let me know if you have any other questions or if
>>> that didn't answer them correctly.
>>>
>>> ~Casey
>>>
>>>
>>> On Sat, May 12, 2012 at 1:10 PM, Andrew Bobulsky <rulerof [at] gmail>
>>> wrote:
>>>>
>>>> Hello Casey,
>>>>
>>>> Quick question!
>>>>
>>>> What's the config file entry for the LVM-type setup you have going on
>>>> for the guest disk look like?  Might you be able to point me to a
>>>> guide that'll show me how to set up a disk like that?
>>>>
>>>> Thanks!
>>>>
>>>> -Andrew Bobulsky
>>>>
>>>> On Fri, May 11, 2012 at 6:51 PM, Casey DeLorme <cdelorme [at] gmail>
>>>> wrote:
>>>> > [Casey's first reply and Peter's original post trimmed; they are quoted
>>>> > in full as posts #1 and #2 earlier in the thread]
>>>
>>>
>>>
>>> _______________________________________________
>>> Xen-users mailing list
>>> Xen-users [at] lists
>>> http://lists.xen.org/xen-users
>>
>>
>

_______________________________________________
Xen-users mailing list
Xen-users [at] lists
http://lists.xen.org/xen-users


cdelorme at gmail

May 12, 2012, 11:43 AM

Post #10 of 33 (2286 views)
Permalink
Re: gaming on multiple OS of the same machine? [In reply to]

More specifically, it's the "first time you initialize the GPU" that
matters. It's not any of the operating systems; it is the card itself not
getting reset. It appears to be a Windows-only problem, because nobody has
reported this issue when running, say, Ubuntu with a passed GPU.

So if you have a bunch of HVMs, your Windows can be given ID 100 and still
work, provided it is the first time you have used the GPU (hence first boot
of Windows).

I am glad I could help, and if you have some extra storage space I
recommend using "dd" and a second LV partition to copy a working backup of
Windows post-install before experimenting. It can save you some
time/effort.
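
A sketch of that dd backup, with hypothetical LV names (the second half repeats the same copy on plain files, so the idea can be tried without a volume group):

```shell
# On the real system: clone the freshly installed Windows LV into a
# same-sized backup LV (names hypothetical, commands need root):
#   lvcreate -L 140G -n windows-backup xen
#   dd if=/dev/xen/windows of=/dev/xen/windows-backup bs=4M conv=fsync
#
# The same block-for-block copy, demonstrated on ordinary files:
dd if=/dev/urandom of=windows.img bs=1M count=4 2>/dev/null
dd if=windows.img of=windows-backup.img bs=1M conv=fsync 2>/dev/null
cmp -s windows.img windows-backup.img && echo "backup matches source"
```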

~Casey

On Sat, May 12, 2012 at 2:28 PM, Andrew Bobulsky <rulerof [at] gmail> wrote:

> Casey,
>
> Wow:
>
> > B) When you install your ATI drivers, you need to do so on first boot so
> > the card is fresh. If you reboot Windows and not the whole machine before
> > trying to install the ATI drivers, the card hasn't been "reset" and either
> > the installation will BSOD or if you are successful the drivers are almost
> > certainly bugged and you will have problems in the future. My solution,
> > reboot Xen before installing ATI drivers. OR! Use the USB Safe Device
> > removal and then install them.
> >
> > To fix your BSOD you may have to safe mode reboot, uninstall the ATI
> > drivers, reboot the entire computer (Xen), and then try again.
>
> My first instinct on reading that was to literally facepalm myself.
> Thank heavens I wear glasses. :D
>
> So basically (or perhaps, "in essence") the drivers need to be
> installed when the ID of the DomU is 1. Fresh boot of Xen, first
> post-Xen boot of the DomU with the device attached. Gonna try that
> now :)
>
> I DO recall the FLR thing you mentioned. Haven't run into that yet
> because I haven't run into a successful install of the drivers :D
>
> Thank you so much. I most certainly would have screwed it up again I
> think! I'm on round 3 of Windows installation. GPLPV is installed,
> so let's see how this goes...
>
> Cheers,
> Andrew Bobulsky
>
> On Sat, May 12, 2012 at 2:19 PM, Casey DeLorme <cdelorme [at] gmail> wrote:
> > Andrew,
> >
> > I hate that error with a passion, but the good news is I may have figured
> > out exactly what causes it, thanks to hundreds of encounters and some
> > insight from Tobias Geiger's posts on VGA Performance Degradation.
> >
> > First, it isn't the GPLPV drivers, it's your ATI card/drivers.
> >
> > You may have noticed that on the first boot of your system your ATI card
> > performs optimally in Windows; when you reboot Windows and not the whole
> > Xen system, the GPU does not get reset.
> >
> > It has been speculated that this is an FLR bug, or perhaps more
> > specifically a Windows FLR bug.
> >
> > The solution: at boot time go to the USB Safe Ejection option, and eject
> > the card. Your screen goes black for 1-3 seconds and it automatically
> > reinstalls. This is essentially a forced FLR, and will fix the
> > performance issues... at least until you reboot Windows again.
> >
> >
> >
> > My Solution(s) to Atikmpag.sys errors:
> >
> > I encountered this bug in two very specific instances.
> >
> > A) If I was using a buggy device; in my case my BlueTooth adapter was
> > dying and I didn't realize it until over a week of failed testing. The
> > buggy BlueTooth device was causing ATI's drivers to freak; how they are
> > related is beyond me. In conclusion, try unplugging any extra devices
> > when testing.
> >
> > B) When you install your ATI drivers, you need to do so on first boot so
> > the card is fresh. If you reboot Windows and not the whole machine before
> > trying to install the ATI drivers, the card hasn't been "reset" and
> > either the installation will BSOD or, if you are successful, the drivers
> > are almost certainly bugged and you will have problems in the future. My
> > solution: reboot Xen before installing ATI drivers. OR! Use the USB Safe
> > Device removal and then install them.
> >
> >
> > To fix your BSOD you may have to safe mode reboot, uninstall the ATI
> > drivers, reboot the entire computer (Xen), and then try again.
> >
> >
> > Also, if you install the Windows Update ATI drivers, you're essentially
> > screwed, since Windows will automatically reinstall them on every boot,
> > before you can eject the device to force an FLR. The only workaround I
> > have found for this is to reinstall Windows. If anyone knows how to tell
> > Windows to "really" delete an installed driver, that would be fabulous,
> > but just the checkbox on device uninstall doesn't do it when you install
> > the Windows Update driver.
> >
> > Hope that helps with a few things; let me know if I wasn't clear (it's a
> > confusing topic to begin with).
> >
> > ~Casey
> >
> > [earlier quoted messages trimmed; they appear in full above]


rulerof at gmail

May 12, 2012, 1:00 PM

Post #11 of 33 (2302 views)
Permalink
Re: gaming on multiple OS of the same machine? [In reply to]

Welp, even with the FLR reset, both by restarting the system and safe
removal... still get the atikmpag.sys BSOD :(

Gonna try removing the drivers, removing GPLPV, and doing it in reverse.

Let's see what we get! :)

On Sat, May 12, 2012 at 2:43 PM, Casey DeLorme <cdelorme [at] gmail> wrote:
> More specifically the "first time you initialize the GPU".  It's not any of
> the operating systems, it is the card itself not getting reset.  It appears
> to only be a Windows problem, because nobody has reported this issue when
> running say Ubuntu with a passed GPU.
>
> So if you have a bunch of HVM's, your Windows can be given ID 100 and still
> work, provided it is the first time you have used the GPU (hence first boot
> of Windows).
>
> I am glad I could help, and if you have some extra storage space I recommend
> using "dd" and a second LV partition to copy a working backup of Windows
> post-install before experimenting.  It can save you some time/effort.
>
> ~Casey
>
> On Sat, May 12, 2012 at 2:28 PM, Andrew Bobulsky <rulerof [at] gmail> wrote:
>>
>> Casey,
>>
>> Wow:
>>
>> > B)  When you install your ATI drivers, you need to do so on first boot
>> > so
>> > the card is fresh.  If you reboot Windows and not the whole machine
>> > before
>> > trying to install the ATI drivers, the card hasn't been "reset" and
>> > either
>> > the installation will BSOD or if you are successful the drivers are
>> > almost
>> > certainly bugged and you will have problems in the future.  My solution,
>> > reboot Xen before installing ATI drivers.  OR!  Use the USB Safe Device
>> > removal and then install them.
>> >
>> > To fix your BSOD you may have to safe mode reboot, uninstall the ATI
>> > drivers, reboot the entire computer (Xen), and then try again.
>>
>> My first instinct on reading that was to literally facepalm myself.
>> Thank heavens I wear glasses.  :D
>>
>> So Basically (or perhaps, "in essence") the drivers need to be
>> installed when the ID of the DomU is 1.  Fresh boot of Xen, first
>> post-Xen boot of the DomU with the device attached.  Gonna try that
>> now :)
>>
>> I DO recall the FLR thing you mentioned.  Haven't run into that yet
>> because I haven't run into a successful install of the drivers :D
>>
>> Thank you so much.  I most certainly would have screwed it up again I
>> think!  I'm on round 3 of Windows installation.  GPLPV is installed,
>> so let's see how this goes...
>>
>> Cheers,
>> Andrew Bobulsky
>>
>> On Sat, May 12, 2012 at 2:19 PM, Casey DeLorme <cdelorme [at] gmail> wrote:
>> > Andrew,
>> >
>> > I hate that error with a passion, but the good news is I may have
>> > figured
>> > out exactly what causes it thanks to hundreds of encounters and some
>> > insight
>> > from Tobias Geiger's posts on VGA Performance Degradation.
>> >
>> > First, it isn't the GPLPV drivers, it's your ATI card/drivers.
>> >
>> > You may have noticed that the first boot of your system your ATI card
>> > performs optimally in Windows, well when you reboot windows and not the
>> > whole Xen system, the GPU does not get reset.
>> >
>> > It has been speculated that this is an FLR bug or perhaps more
>> > specifically
>> > a Windows FLR bug.
>> >
>> > The solution, at boot time go to the USB Safe Ejection option, and eject
>> > the
>> > card.  Your screen goes black for 1-3 seconds and it automatically
>> > reinstalls.  This is essentially a forced FLR, and will fix the
>> > performance
>> > issues... at least until you reboot windows again.
>> >
>> >
>> >
>> > My Solution(s) to Atikmpag.sys errors:
>> >
>> > I encountered this bug in two very specific instances.
>> >
>> > A)  If I was using a buggy device, in my case my BlueTooth adapter was
>> > dying
>> > and I didn't realize it until over a week of failed testing.  The buggy
>> > BlueTooth device was causing ATI's drivers to freak, how they are
>> > related is
>> > beyond me.  In conclusion, try unplugging any extra devices when
>> > testing.
>> >
>> > B)  When you install your ATI drivers, you need to do so on first boot
>> > so
>> > the card is fresh.  If you reboot Windows and not the whole machine
>> > before
>> > trying to install the ATI drivers, the card hasn't been "reset" and
>> > either
>> > the installation will BSOD or if you are successful the drivers are
>> > almost
>> > certainly bugged and you will have problems in the future.  My solution,
>> > reboot Xen before installing ATI drivers.  OR!  Use the USB Safe Device
>> > removal and then install them.
>> >
>> >
>> > To fix your BSOD you may have to safe mode reboot, uninstall the ATI
>> > drivers, reboot the entire computer (Xen), and then try again.
>> >
>> >
>> > Also, if you install the Windows Update ATI drivers, you're essentially
>> > screwed since it will automatically reinstall them every boot, which
>> > means
>> > before you can eject the device to force FLR.  The only workaround I
>> > have
>> > found for this is to reinstall Windows.  If anyone knows how to tell
>> > Windows
>> > to "really" delete an installed driver that would be fabulous, but just
>> > the
>> > checkbox on device uninstall doesn't do it when you install the Windows
>> > Update driver.
>> >
>> > Hope that helps with a few things, let me know if I wasn't clear (It's a
>> > confusing topic to begin with).
>> >
>> > ~Casey
>> >
>> > On Sat, May 12, 2012 at 2:10 PM, chris <tknchris [at] gmail> wrote:
>> >>
>> >> kpartx being one of them! awesome tool for lvm backed domU's
>> >>
>> >>
>> >> On Sat, May 12, 2012 at 1:48 PM, Casey DeLorme <cdelorme [at] gmail>
>> >> wrote:
>> >>>
>> >>> Hi Andrew,
>> >>>
>> >>> You mean the Windows DomU configuration, right?  I put it up on
>> >>> pastebin
>> >>> here along with a couple other configuration files:
>> >>> http://pastebin.com/9E1g1BHf
>> >>>
>> >>> I'm just using normal LV partitions and passing them to an HVM, there
>> >>> is
>> >>> no special trick so any LVM guide should put you on the right track.
>> >>>
>> >>> I named my SSD VG "xen" so my drives are all found at /dev/xen/lvname.
>> >>>
>> >>> **********
>> >>>
>> >>> The only convoluted part is my Dom0 installation, since I used EFI
>> >>> boot
>> >>> and an LV to store root (/), so I have two 256MB partitions, one FAT32
>> >>> for
>> >>> EFI, one Ext4 for boot (/boot) and then the rest of the disk to LVM.
>> >>>  I did
>> >>> the LVM setup right in the installation, added the SSD partition (PV)
>> >>> to a
>> >>> Volume Group (VG) then threw on a few partitions.
>> >>>
>> >>> I created a linux root partition of 8GB, a home partition of 20GB, and
>> >>> a
>> >>> swap partition of 2GB.  I mapped those in the configuration, then I
>> >>> went on
>> >>> ahead and made a 140GB partition for windows, and two 4GB partitions
>> >>> for
>> >>> PFSense and NGinx.
>> >>>
>> >>> Once the system is installed, the standard LVM tools can be used,
>> >>> lvcreate, lvresize, lvremove, lv/vg/pvdisplay commands, etc...
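In command form, that layout comes out roughly as follows. This is a sketch, assuming the LVM partition ended up as /dev/sda3 (the third partition after the EFI and /boot ones) and the VG is named "xen"; adjust device and sizes to your disk:

```shell
pvcreate /dev/sda3                # turn the big partition into a Physical Volume
vgcreate xen /dev/sda3            # Volume Group named "xen"
lvcreate -L 8G   -n root    xen   # Linux root (/)
lvcreate -L 20G  -n home    xen
lvcreate -L 2G   -n swap    xen
lvcreate -L 140G -n windows xen   # handed whole to the Windows HVM
lvcreate -L 4G   -n pfsense xen
lvcreate -L 4G   -n nginx   xen
vgdisplay xen                     # verify the result
```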
>> >>>
>> >>> My Disk IO is not optimal, which might be because I run four systems
>> >>> off
>> >>> the same drive at the same time, so if you intend to use many systems
>> >>> you
>> >>> may want to split the drives onto multiple physical disks.  However, I
>> >>> have
>> >>> reason to believe my IO problems are a Xen bug, I just haven't had
>> >>> time to
>> >>> test/prove it.
>> >>>
>> >>> **********
>> >>>
>> >>> When you pass a LV to an HVM it treats it like a physical disk, and it
>> >>> will create a partition table, MBR code, and partitions inside the LV
>> >>> (partitions within partitions).
>> >>>
>> >>> When I get some free time I want to write up a pretty verbose guide on
>> >>> LVM specifically for Xen, there are plenty of things I've learned
>> >>> about
>> >>> accessing the partitions too.
>> >>>
>> >>> Some things I learned recently with Xen, IDE drives (hdX) only allow
>> >>> four
>> >>> passed devices, so if you have more than 3 storage partitions you will
>> >>> want
>> >>> to use SCSI (sdX) for them, but SCSI drives are not bootable.  Hence
>> >>> my
>> >>> configuration has "hda" for the boot drive (lv partition), and sdX for
>> >>> all
>> >>> storage drives (lv partitions) (X = alphabetical increment, a, b, c, d,
>> >>> etc).
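In config terms, that hda/sdX split might look like the fragment below. The LV names here are examples, not taken from the pastebin:

```
disk = [
    'phy:/dev/xen/windows,hda,w',    # boot disk: IDE, bootable, only 4 hdX slots
    'phy:/dev/xen/storage1,sda,w',   # extra disks: SCSI, not bootable
    'phy:/dev/xen/storage2,sdb,w',
]
```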
>> >>>
>> >>> **********
>> >>>
>> >>> Hope that helps a bit, let me know if you have any other questions or
>> >>> if that didn't answer them correctly.
>> >>>
>> >>> ~Casey
>> >>>
>> >>>
>> >>> On Sat, May 12, 2012 at 1:10 PM, Andrew Bobulsky <rulerof [at] gmail>
>> >>> wrote:
>> >>>>
>> >>>> Hello Casey,
>> >>>>
>> >>>> Quick question!
>> >>>>
>> >>>> What's the config file entry for the LVM-type setup you have going on
>> >>>> for the guest disk look like?  Might you be able to point me to a
>> >>>> guide that'll show me how to set up a disk like that?
>> >>>>
>> >>>> Thanks!
>> >>>>
>> >>>> -Andrew Bobulsky
>> >>>>
>> >>>> On Fri, May 11, 2012 at 6:51 PM, Casey DeLorme <cdelorme [at] gmail>
>> >>>> wrote:
>> >>>> > Hello Peter,
>> >>>> >
>> >>>> >
>> >>>> > Question #1: Performance
>> >>>> >
>> >>>> > With x86 virtualization, hardware such as CPUs and memory is mapped,
>> >>>> > not layered, so there should be almost no difference in speed from
>> >>>> > running natively.
>> >>>> >
>> >>>> > I am running Windows 7 HVM with an ATI Radeon 6870.  My system has
>> >>>> > 12GB of
>> >>>> > RAM, and a Core i7 2600.  I gave Windows 4 vcores and 6GB of
>> >>>> > memory,
>> >>>> > Windows
>> >>>> > Experience index gives me 7.5 for CPU and 7.6 for RAM.  With VGA
>> >>>> > Passthrough
>> >>>> > I have 7.8 for both graphics scores.  I am running all my systems
>> >>>> > on
>> >>>> > LVM
>> >>>> > partitions on an OCZ Vertex 3 Drive, without PV Drivers windows
>> >>>> > scored
>> >>>> > 6.2
>> >>>> > for HDD speeds, with PV drivers it jumped to 7.8.
>> >>>> >
>> >>>> > Scores aside, performance with CPU/RAM is excellent, I am hoping to
>> >>>> > create a
>> >>>> > demo video of my system when I get some time (busy with college).
>> >>>> >
>> >>>> > My biggest concern right now is Disk IO ranges from excellent to
>> >>>> > abysmal,
>> >>>> > but I have a feeling the displayed values and actual speeds might
>> >>>> > be
>> >>>> > different.  I'll be putting together an extensive test of this
>> >>>> > later, but
>> >>>> > let's just say IO speeds vary (even with PV drivers).  The Disk IO
>> >>>> > does not
>> >>>> > appear to have any effect on games from my experience, so it may
>> >>>> > only
>> >>>> > be
>> >>>> > write speeds.  I have not run any disk benchmarks.
>> >>>> >
>> >>>> >
>> >>>> > Question #2: GPU Assignment
>> >>>> >
>> >>>> > I have no idea how Dual GPU cards work, so I can't really answer
>> >>>> > this
>> >>>> > question.
>> >>>> >
>> >>>> > I can advise you to be on the lookout for motherboards with NF200
>> >>>> > chipsets
>> >>>> > or strange PCI Switches, I bought an ASRock Extreme7 Gen3, great
>> >>>> > board, but
>> >>>> > NF200 is completely incompatible with VT-d, ended up with only one
>> >>>> > PCIe slot
>> >>>> > to pass.  I can recommend the ASRock Extreme4 Gen3, got it right
>> >>>> > now,
>> >>>> > if I
>> >>>> > had enough money to buy a bigger PSU and a second GPU I would be
>> >>>> > doing
>> >>>> > what
>> >>>> > you are planning to.
>> >>>> >
>> >>>> >
>> >>>> > Question #3:  Configuration
>> >>>> >
>> >>>> > Two approaches to device connection, USB Passthrough and PCI
>> >>>> > Passthrough.  I
>> >>>> > haven't tried USB Passthrough, but I have a feeling it wouldn't
>> >>>> > work
>> >>>> > with
>> >>>> > complex devices that require OS drivers, such as BlueTooth receivers
>> >>>> > or
>> >>>> > an
>> >>>> > XBox 360 Wireless adapter.
>> >>>> >
>> >>>> > I took the second approach of passing the USB Controller, but this
>> >>>> > will vary
>> >>>> > by hardware.  The ASRock Extreme4 Gen3 has four USB PCI
>> >>>> > Controllers, I
>> >>>> > don't
>> >>>> > have any idea how you would check this stuff from their manuals, I
>> >>>> > found out
>> >>>> > when I ran "lspci" from Linux Dom0.
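The era-typical dom0 steps for finding and hiding a USB controller look something like this. A sketch only: the 0000:00:1d.0 address and the ehci_hcd driver name are examples, use whatever lspci actually reports on your board, and pciback must be available in your dom0 kernel:

```shell
# List USB controllers with their PCI addresses (BDF)
lspci | grep -i usb

# Hide one controller from dom0 so it can be passed to a guest:
# unbind it from its current driver, then hand it to pciback
echo 0000:00:1d.0 > /sys/bus/pci/drivers/ehci_hcd/unbind
echo 0000:00:1d.0 > /sys/bus/pci/drivers/pciback/new_slot
echo 0000:00:1d.0 > /sys/bus/pci/drivers/pciback/bind
```

The hidden controller can then be listed in the domU config, e.g. pci = ['00:1d.0'].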
>> >>>> >
>> >>>> > I had no luck with USB 3.0, many devices weren't functional when
>> >>>> > connected
>> >>>> > to it, so I left my four USB 3.0 ports to my Dom0, and passed all
>> >>>> > my
>> >>>> > USB 2.0
>> >>>> > ports.
>> >>>> >
>> >>>> > Again hardware specific: one of the buses had 4 ports, the other had
>> >>>> > only two,
>> >>>> > I bought a 4 port USB PCI plate and attached the additional USB
>> >>>> > pins
>> >>>> > from
>> >>>> > the board to turn the 2-port into a 6-port controller.
>> >>>> >
>> >>>> > I use a ton of USB devices on my Windows system, Disk IO blows, but
>> >>>> > everything else functions great.  With PCI Passed USB I am able to
>> >>>> > use
>> >>>> > an
>> >>>> > XBox 360 Wireless Adapter, 2 Wireless USB Keyboards in different
>> >>>> > areas
>> >>>> > of
>> >>>> > the room, a Hauppauge HD PVR, A logitech C910 HD Webcam, and a
>> >>>> > Logitech
>> >>>> > Wireless Mouse.  I had BlueTooth but I got rid of it, the device
>> >>>> > itself went
>> >>>> > bad and was causing my system to BlueScreen.
>> >>>> >
>> >>>> > When I tested USB 3.0, I got no video from my Hauppauge HD PVR or my
>> >>>> > Logitech
>> >>>> > C910 webcam, and various devices when connected failed to function
>> >>>> > right.
>> >>>> >
>> >>>> >
>> >>>> > Question #4:  Other?
>> >>>> >
>> >>>> > I am 100% certain you could get a system running 2 Windows 7 HVM's
>> >>>> > up
>> >>>> > for
>> >>>> > gaming, but you may need to daisy chain some USB devices if you
>> >>>> > want
>> >>>> > more
>> >>>> > than just a keyboard and mouse for each.
>> >>>> >
>> >>>> > Also, if you are not confident in your ability to work with *nix, I
>> >>>> > wouldn't
>> >>>> > advise it.  I had spent two years tinkering with Web Servers in
>> >>>> > Debian, so I
>> >>>> > thought I would have an easy time of things.
>> >>>> >
>> >>>> > I tried it on a week off, ended up taking me 2 months to complete
>> >>>> > my
>> >>>> > setup.
>> >>>> >  The results are spectacular, but be prepared to spend many hours
>> >>>> > debugging
>> >>>> > unless you find a really good guide.
>> >>>> >
>> >>>> > I would recommend going for a Two Windows on One Rig, and duplicate
>> >>>> > that rig
>> >>>> > for a second machine, and I recommend that for two reasons.  If you
>> >>>> > are
>> >>>> > successful with the first machine, you can easily copy the process.
>> >>>> >  This
>> >>>> > will save you hours of attempting to get a whole four Gaming
>> >>>> > machines
>> >>>> > working on one system.
>> >>>> >
>> >>>> >
>> >>>> > As stated, I only run one gaming machine, but I do have two other
>> >>>> > HVM's
>> >>>> > running, one manages my households network and the other is a
>> >>>> > private
>> >>>> > web/file server.  So, performance wise Xen can do a lot.
>> >>>> >
>> >>>> > Best of luck,
>> >>>> >
>> >>>> > ~Casey
>> >>>> >
>> >>>> > On Fri, May 11, 2012 at 6:17 PM, Peter Vandendriessche
>> >>>> > <peter.vandendriessche [at] gmail> wrote:
>> >>>> >>
>> >>>> >> Hi,
>> >>>> >>
>> >>>> >> I am new to Xen and I was wondering if the following construction
>> >>>> >> would be
>> >>>> >> feasible with the current Xen.
>> >>>> >>
>> >>>> >> I would like to put 2/3/4 new computers in my house, mainly for
>> >>>> >> gaming.
>> >>>> >> Instead of buying 2/3/4 different computers, I was thinking of
>> >>>> >> building one
>> >>>> >> computer with a 4/6/8-core CPU, 2/3/4 GPUs, 2/3/4 small SSDs, and
>> >>>> >> attach
>> >>>> >> 2/3/4 monitors to it, 2/3/4 keyboards and 2/3/4 mouses, and run
>> >>>> >> VGA
>> >>>> >> passthrough. This would save me money on hardware, and it would
>> >>>> >> also
>> >>>> >> save
>> >>>> >> quite some space on the desk where I wanted to put them.
>> >>>> >>
>> >>>> >> If this is possible, I have a few additional questions about this:
>> >>>> >>
>> >>>> >> 1) Would the speed on each virtual machine be effectively that of
>> >>>> >> a
>> >>>> >> 2-core
>> >>>> >> CPU with 1 GPU? What about memory speed/latency?
>> >>>> >> 2) Is it possible to split dual GPUs, e.g. drive 4 OSes with 2x
>> >>>> >> Radeon HD
>> >>>> >> 6990 (=4 GPUs in 2 PCI-e slots)?
>> >>>> >> 3) How should one configure the machine such that each OS receives
>> >>>> >> only
>> >>>> >> the input from its own keyboard/mouse?
>> >>>> >> 4) Any other problems or concerns that you can think of?
>> >>>> >>
>> >>>> >> Thanks in advance,
>> >>>> >> Peter
>> >>>> >>
>> >>>> >>
>> >>>> >> _______________________________________________
>> >>>> >> Xen-users mailing list
>> >>>> >> Xen-users [at] lists
>> >>>> >> http://lists.xen.org/xen-users
>> >>
>> >>
>> >
>
>

_______________________________________________
Xen-users mailing list
Xen-users [at] lists
http://lists.xen.org/xen-users


rulerof at gmail

May 12, 2012, 1:06 PM

Post #12 of 33 (2289 views)
Permalink
Re: gaming on multiple OS of the same machine? [In reply to]

Sorry, but another question if you could comment, Casey:

Is this normal when using GPLPV? (screenshot attached)

I'd love to actually get this running solidly, and help to write that
guide of yours ;)

Cheers,
Andrew Bobulsky


On Sat, May 12, 2012 at 4:00 PM, Andrew Bobulsky <rulerof [at] gmail> wrote:
> Welp, even with the FLR reset, both by restarting the system and safe
> removal... still get the atikmpag.sys bsod :(
>
> Gonna try removing the drivers, removing GPLPV, and doing it in reverse.
>
> Let's see what we get! :)
>
> On Sat, May 12, 2012 at 2:43 PM, Casey DeLorme <cdelorme [at] gmail> wrote:
>> More specifically the "first time you initialize the GPU".  It's not any of
>> the operating systems, it is the card itself not getting reset.  It appears
>> to only be a Windows problem, because nobody has reported this issue when
>> running say Ubuntu with a passed GPU.
>>
>> So if you have a bunch of HVM's, your Windows can be given ID 100 and still
>> work, provided it is the first time you have used the GPU (hence first boot
>> of Windows).
>>
>> I am glad I could help, and if you have some extra storage space I recommend
>> using "dd" and a second LV partition to copy a working backup of Windows
>> post-install before experimenting.  It can save you some time/effort.
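The dd backup Casey describes is a one-liner once a spare LV of equal size exists. A sketch, assuming the VG "xen" and the 140GB Windows LV from earlier in the thread:

```shell
# Create a backup LV the same size as the Windows LV
lvcreate -L 140G -n windows_bak xen

# With the domU shut down, clone the disk block-for-block
dd if=/dev/xen/windows of=/dev/xen/windows_bak bs=4M

# Restore later by swapping if= and of= (again with the domU shut down)
```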
>>
>> ~Casey
>>
>> On Sat, May 12, 2012 at 2:28 PM, Andrew Bobulsky <rulerof [at] gmail> wrote:
>>>
>>> Casey,
>>>
>>> Wow:
>>>
>>> > B)  When you install your ATI drivers, you need to do so on first boot
>>> > so
>>> > the card is fresh.  If you reboot Windows and not the whole machine
>>> > before
>>> > trying to install the ATI drivers, the card hasn't been "reset" and
>>> > either
>>> > the installation will BSOD or if you are successful the drivers are
>>> > almost
>>> > certainly bugged and you will have problems in the future.  My solution,
>>> > reboot Xen before installing ATI drivers.  OR!  Use the USB Safe Device
>>> > removal and then install them.
>>> >
>>> > To fix your BSOD you may have to safe mode reboot, uninstall the ATI
>>> > drivers, reboot the entire computer (Xen), and then try again.
>>>
>>> My first instinct on reading that was to literally facepalm myself.
>>> Thank heavens I wear glasses.  :D
>>>
>>> So Basically (or perhaps, "in essence") the drivers need to be
>>> installed when the ID of the DomU is 1.  Fresh boot of Xen, first
>>> post-Xen boot of the DomU with the device attached.  Gonna try that
>>> now :)
>>>
>>> I DO recall the FLR thing you mentioned.  Haven't run into that yet
>>> because I haven't run into a successful install of the drivers :D
>>>
>>> Thank you so much.  I most certainly would have screwed it up again I
>>> think!  I'm on round 3 of Windows installation.  GPLPV is installed,
>>> so let's see how this goes...
>>>
>>> Cheers,
>>> Andrew Bobulsky
>>>
>>> On Sat, May 12, 2012 at 2:19 PM, Casey DeLorme <cdelorme [at] gmail> wrote:
>>> > Andrew,
>>> >
>>> > I hate that error with a passion, but the good news is I may have
>>> > figured
>>> > out exactly what causes it thanks to hundreds of encounters and some
>>> > insight
>>> > from Tobias Geiger's posts on VGA Performance Degradation.
>>> >
>>> > First, it isn't the GPLPV drivers, it's your ATI card/drivers.
>>> >
>>> > You may have noticed that the first boot of your system your ATI card
>>> > performs optimally in Windows, well when you reboot windows and not the
>>> > whole Xen system, the GPU does not get reset.
>>> >
>>> > It has been speculated that this is an FLR bug or perhaps more
>>> > specifically
>>> > a Windows FLR bug.
>>> >
>>> > The solution, at boot time go to the USB Safe Ejection option, and eject
>>> > the
>>> > card.  Your screen goes black for 1-3 seconds and it automatically
>>> > reinstalls.  This is essentially a forced FLR, and will fix the
>>> > performance
>>> > issues... at least until you reboot windows again.
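If the in-guest safe-ejection trick works, a dom0-side detach/reattach may accomplish the same forced reset, if your toolstack supports it (xm pci-detach on older setups). Untested here, and the domain name "windows" and the 01:00.0 BDF are placeholders:

```shell
# Detach and reattach the GPU while the HVM is running
# ("windows" and 01:00.0 are examples; use your domU name and BDF)
xl pci-detach windows 01:00.0
xl pci-attach windows 01:00.0
```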
>>> >
>>> >
>>> >
>>> > My Solution(s) to Atikmpag.sys errors:
>>> >
>>> > I encountered this bug in two very specific instances.
>>> >
>>> > A)  If I was using a buggy device, in my case my BlueTooth adapter was
>>> > dying
>>> > and I didn't realize it until over a week of failed testing.  The buggy
>>> > BlueTooth device was causing ATI's drivers to freak, how they are
>>> > related is
>>> > beyond me.  In conclusion, try unplugging any extra devices when
>>> > testing.
>>> >
>>> > B)  When you install your ATI drivers, you need to do so on first boot
>>> > so
>>> > the card is fresh.  If you reboot Windows and not the whole machine
>>> > before
>>> > trying to install the ATI drivers, the card hasn't been "reset" and
>>> > either
>>> > the installation will BSOD or if you are successful the drivers are
>>> > almost
>>> > certainly bugged and you will have problems in the future.  My solution,
>>> > reboot Xen before installing ATI drivers.  OR!  Use the USB Safe Device
>>> > removal and then install them.
>>> >
>>> >
>>> > To fix your BSOD you may have to safe mode reboot, uninstall the ATI
>>> > drivers, reboot the entire computer (Xen), and then try again.
>>> >
>>> >
>>> > Also, if you install the Windows Update ATI drivers, you're essentially
>>> > screwed, since it will automatically reinstall them on every boot,
>>> > before you can eject the device to force FLR.  The only workaround I
>>> > have
>>> > found for this is to reinstall Windows.  If anyone knows how to tell
>>> > Windows
>>> > to "really" delete an installed driver that would be fabulous, but just
>>> > the
>>> > checkbox on device uninstall doesn't do it when you install the Windows
>>> > Update driver.
>>> >
>>> > Hope that helps with a few things, let me know if I wasn't clear (It's a
>>> > confusing topic to begin with).
>>> >
>>> > ~Casey
>>> >
>>> > On Sat, May 12, 2012 at 2:10 PM, chris <tknchris [at] gmail> wrote:
>>> >>
>>> >> kpartx being one of them! awesome tool for lvm backed domU's
>>> >>
>>> >>
>>> >> On Sat, May 12, 2012 at 1:48 PM, Casey DeLorme <cdelorme [at] gmail>
>>> >> wrote:
>>> >>>
>>> >>> Hi Andrew,
>>> >>>
>>> >>> You mean the Windows DomU configuration, right?  I put it up on
>>> >>> pastebin
>>> >>> here along with a couple other configuration files:
>>> >>> http://pastebin.com/9E1g1BHf
>>> >>>
>>> >>> I'm just using normal LV partitions and passing them to an HVM, there
>>> >>> is
>>> >>> no special trick so any LVM guide should put you on the right track.
>>> >>>
>>> >>> I named my SSD VG "xen" so my drives are all found at /dev/xen/lvname.
>>> >>>
>>> >>> **********
>>> >>>
>>> >>> The only convoluted part is my Dom0 installation, since I used EFI
>>> >>> boot
>>> >>> and an LV to store root (/), so I have two 256MB partitions, one FAT32
>>> >>> for
>>> >>> EFI, one Ext4 for boot (/boot) and then the rest of the disk to LVM.
>>> >>>  I did
>>> >>> the LVM setup right in the installation, added the SSD partition (PV)
>>> >>> to a
>>> >>> Volume Group (VG) then threw on a few partitions.
>>> >>>
>>> >>> I created a linux root partition of 8GB, a home partition of 20GB, and
>>> >>> a
>>> >>> swap partition of 2GB.  I mapped those in the configuration, then I
>>> >>> went on
>>> >>> ahead and made a 140GB partition for windows, and two 4GB partitions
>>> >>> for
>>> >>> PFSense and NGinx.
>>> >>>
>>> >>> Once the system is installed, the standard LVM tools can be used,
>>> >>> lvcreate, lvresize, lvremove, lv/vg/pvdisplay commands, etc...
>>> >>>
>>> >>> My Disk IO is not optimal, which might be because I run four systems
>>> >>> off
>>> >>> the same drive at the same time, so if you intend to use many systems
>>> >>> you
>>> >>> may want to split the drives onto multiple physical disks.  However, I
>>> >>> have
>>> >>> reason to believe my IO problems are a Xen bug, I just haven't had
>>> >>> time to
>>> >>> test/prove it.
>>> >>>
>>> >>> **********
>>> >>>
>>> >>> When you pass a LV to an HVM it treats it like a physical disk, and it
>>> >>> will create a partition table, MBR code, and partitions inside the LV
>>> >>> (partitions within partitions).
>>> >>>
>>> >>> When I get some free time I want to write up a pretty verbose guide on
>>> >>> LVM specifically for Xen, there are plenty of things I've learned
>>> >>> about
>>> >>> accessing the partitions too.
>>> >>>
>>> >>> Some things I learned recently with Xen, IDE drives (hdX) only allow
>>> >>> four
>>> >>> passed devices, so if you have more than 3 storage partitions you will
>>> >>> want
>>> >>> to use SCSI (sdX) for them, but SCSI drives are not bootable.  Hence
>>> >>> my
>>> >>> configuration has "hda" for the boot drive (lv partition), and sdX for
>>> >>> all
>>> >>> storage drives (lv partitions) (X = alphabetical increment, a, b, c, d,
>>> >>> etc).
>>> >>>
>>> >>> **********
>>> >>>
>>> >>> Hope that helps a bit, let me know if you have any other questions or
>>> >>> if that didn't answer them correctly.
>>> >>>
>>> >>> ~Casey
>>> >>>
>>> >>>
>>> >>> On Sat, May 12, 2012 at 1:10 PM, Andrew Bobulsky <rulerof [at] gmail>
>>> >>> wrote:
>>> >>>>
>>> >>>> Hello Casey,
>>> >>>>
>>> >>>> Quick question!
>>> >>>>
>>> >>>> What's the config file entry for the LVM-type setup you have going on
>>> >>>> for the guest disk look like?  Might you be able to point me to a
>>> >>>> guide that'll show me how to set up a disk like that?
>>> >>>>
>>> >>>> Thanks!
>>> >>>>
>>> >>>> -Andrew Bobulsky
>>> >>>>
>>> >>>> On Fri, May 11, 2012 at 6:51 PM, Casey DeLorme <cdelorme [at] gmail>
>>> >>>> wrote:
>>> >>>> > Hello Peter,
>>> >>>> >
>>> >>>> >
>>> >>>> > Question #1: Performance
>>> >>>> >
>>> >>>> > With x86 virtualization, hardware such as CPUs and memory is mapped,
>>> >>>> > not layered, so there should be almost no difference in speed from
>>> >>>> > running natively.
>>> >>>> >
>>> >>>> > I am running Windows 7 HVM with an ATI Radeon 6870.  My system has
>>> >>>> > 12GB of
>>> >>>> > RAM, and a Core i7 2600.  I gave Windows 4 vcores and 6GB of
>>> >>>> > memory,
>>> >>>> > Windows
>>> >>>> > Experience index gives me 7.5 for CPU and 7.6 for RAM.  With VGA
>>> >>>> > Passthrough
>>> >>>> > I have 7.8 for both graphics scores.  I am running all my systems
>>> >>>> > on
>>> >>>> > LVM
>>> >>>> > partitions on an OCZ Vertex 3 Drive, without PV Drivers windows
>>> >>>> > scored
>>> >>>> > 6.2
>>> >>>> > for HDD speeds, with PV drivers it jumped to 7.8.
>>> >>>> >
>>> >>>> > Scores aside, performance with CPU/RAM is excellent, I am hoping to
>>> >>>> > create a
>>> >>>> > demo video of my system when I get some time (busy with college).
>>> >>>> >
>>> >>>> > My biggest concern right now is Disk IO ranges from excellent to
>>> >>>> > abysmal,
>>> >>>> > but I have a feeling the displayed values and actual speeds might
>>> >>>> > be
>>> >>>> > different.  I'll be putting together an extensive test of this
>>> >>>> > later, but
>>> >>>> > let's just say IO speeds vary (even with PV drivers).  The Disk IO
>>> >>>> > does not
>>> >>>> > appear to have any effect on games from my experience, so it may
>>> >>>> > only
>>> >>>> > be
>>> >>>> > write speeds.  I have not run any disk benchmarks.
>>> >>>> >
>>> >>>> >
>>> >>>> > Question #2: GPU Assignment
>>> >>>> >
>>> >>>> > I have no idea how Dual GPU cards work, so I can't really answer
>>> >>>> > this
>>> >>>> > question.
>>> >>>> >
>>> >>>> > I can advise you to be on the lookout for motherboards with NF200
>>> >>>> > chipsets
>>> >>>> > or strange PCI Switches, I bought an ASRock Extreme7 Gen3, great
>>> >>>> > board, but
>>> >>>> > NF200 is completely incompatible with VT-d, ended up with only one
>>> >>>> > PCIe slot
>>> >>>> > to pass.  I can recommend the ASRock Extreme4 Gen3, got it right
>>> >>>> > now,
>>> >>>> > if I
>>> >>>> > had enough money to buy a bigger PSU and a second GPU I would be
>>> >>>> > doing
>>> >>>> > what
>>> >>>> > you are planning to.
>>> >>>> >
>>> >>>> >
>>> >>>> > Question #3:  Configuration
>>> >>>> >
>>> >>>> > Two approaches to device connection, USB Passthrough and PCI
>>> >>>> > Passthrough.  I
>>> >>>> > haven't tried USB Passthrough, but I have a feeling it wouldn't
>>> >>>> > work
>>> >>>> > with
>>> >>>> > complex devices that require OS drivers, such as BlueTooth receivers
>>> >>>> > or
>>> >>>> > an
>>> >>>> > XBox 360 Wireless adapter.
>>> >>>> >
>>> >>>> > I took the second approach of passing the USB Controller, but this
>>> >>>> > will vary
>>> >>>> > by hardware.  The ASRock Extreme4 Gen3 has four USB PCI
>>> >>>> > Controllers, I
>>> >>>> > don't
>>> >>>> > have any idea how you would check this stuff from their manuals, I
>>> >>>> > found out
>>> >>>> > when I ran "lspci" from Linux Dom0.
>>> >>>> >
>>> >>>> > I had no luck with USB 3.0, many devices weren't functional when
>>> >>>> > connected
>>> >>>> > to it, so I left my four USB 3.0 ports to my Dom0, and passed all
>>> >>>> > my
>>> >>>> > USB 2.0
>>> >>>> > ports.
>>> >>>> >
>>> >>>> > Again hardware specific: one of the buses had 4 ports, the other had
>>> >>>> > only two,
>>> >>>> > I bought a 4 port USB PCI plate and attached the additional USB
>>> >>>> > pins
>>> >>>> > from
>>> >>>> > the board to turn the 2-port into a 6-port controller.
>>> >>>> >
>>> >>>> > I use a ton of USB devices on my Windows system, Disk IO blows, but
>>> >>>> > everything else functions great.  With PCI Passed USB I am able to
>>> >>>> > use
>>> >>>> > an
>>> >>>> > XBox 360 Wireless Adapter, 2 Wireless USB Keyboards in different
>>> >>>> > areas
>>> >>>> > of
>>> >>>> > the room, a Hauppauge HD PVR, A logitech C910 HD Webcam, and a
>>> >>>> > Logitech
>>> >>>> > Wireless Mouse.  I had BlueTooth but I got rid of it, the device
>>> >>>> > itself went
>>> >>>> > bad and was causing my system to BlueScreen.
>>> >>>> >
>>> >>>> > When I tested USB 3.0, I got no video from my Hauppauge HD PVR or my
>>> >>>> > Logitech
>>> >>>> > C910 webcam, and various devices when connected failed to function
>>> >>>> > right.
>>> >>>> >
>>> >>>> >
>>> >>>> > Question #4:  Other?
>>> >>>> >
>>> >>>> > I am 100% certain you could get a system running 2 Windows 7 HVM's
>>> >>>> > up
>>> >>>> > for
>>> >>>> > gaming, but you may need to daisy chain some USB devices if you
>>> >>>> > want
>>> >>>> > more
>>> >>>> > than just a keyboard and mouse for each.
>>> >>>> >
>>> >>>> > Also, if you are not confident in your ability to work with *nix, I
>>> >>>> > wouldn't
>>> >>>> > advise it.  I had spent two years tinkering with Web Servers in
>>> >>>> > Debian, so I
>>> >>>> > thought I would have an easy time of things.
>>> >>>> >
>>> >>>> > I tried it on a week off, ended up taking me 2 months to complete
>>> >>>> > my
>>> >>>> > setup.
>>> >>>> >  The results are spectacular, but be prepared to spend many hours
>>> >>>> > debugging
>>> >>>> > unless you find a really good guide.
>>> >>>> >
>>> >>>> > I would recommend going for a Two Windows on One Rig, and duplicate
>>> >>>> > that rig
>>> >>>> > for a second machine, and I recommend that for two reasons.  If you
>>> >>>> > are
>>> >>>> > successful with the first machine, you can easily copy the process.
>>> >>>> >  This
>>> >>>> > will save you hours of attempting to get a whole four Gaming
>>> >>>> > machines
>>> >>>> > working on one system.
>>> >>>> >
>>> >>>> >
>>> >>>> > As stated, I only run one gaming machine, but I do have two other
>>> >>>> > HVM's
>>> >>>> > running, one manages my households network and the other is a
>>> >>>> > private
>>> >>>> > web/file server.  So, performance wise Xen can do a lot.
>>> >>>> >
>>> >>>> > Best of luck,
>>> >>>> >
>>> >>>> > ~Casey
>>> >>>> >
>>> >>>> > On Fri, May 11, 2012 at 6:17 PM, Peter Vandendriessche
>>> >>>> > <peter.vandendriessche [at] gmail> wrote:
>>> >>>> >>
>>> >>>> >> Hi,
>>> >>>> >>
>>> >>>> >> I am new to Xen and I was wondering if the following construction
>>> >>>> >> would be
>>> >>>> >> feasible with the current Xen.
>>> >>>> >>
>>> >>>> >> I would like to put 2/3/4 new computers in my house, mainly for
>>> >>>> >> gaming.
>>> >>>> >> Instead of buying 2/3/4 different computers, I was thinking of
>>> >>>> >> building one
>>> >>>> >> computer with a 4/6/8-core CPU, 2/3/4 GPUs, 2/3/4 small SSDs, and
>>> >>>> >> attach
>>> >>>> >> 2/3/4 monitors to it, 2/3/4 keyboards and 2/3/4 mouses, and run
>>> >>>> >> VGA
>>> >>>> >> passthrough. This would save me money on hardware, and it would
>>> >>>> >> also
>>> >>>> >> save
>>> >>>> >> quite some space on the desk where I wanted to put them.
>>> >>>> >>
>>> >>>> >> If this is possible, I have a few additional questions about this:
>>> >>>> >>
>>> >>>> >> 1) Would the speed on each virtual machine be effectively that of
>>> >>>> >> a
>>> >>>> >> 2-core
>>> >>>> >> CPU with 1 GPU? What about memory speed/latency?
>>> >>>> >> 2) Is it possible to split dual GPUs, e.g. drive 4 OSes with 2x
>>> >>>> >> Radeon HD
>>> >>>> >> 6990 (=4 GPUs in 2 PCI-e slots)?
>>> >>>> >> 3) How should one configure the machine such that each OS receives
>>> >>>> >> only
>>> >>>> >> the input from its own keyboard/mouse?
>>> >>>> >> 4) Any other problems or concerns that you can think of?
>>> >>>> >>
>>> >>>> >> Thanks in advance,
>>> >>>> >> Peter
>>> >>>> >>
>>> >>>> >>
>>> >>>> >> _______________________________________________
>>> >>>> >> Xen-users mailing list
>>> >>>> >> Xen-users [at] lists
>>> >>>> >> http://lists.xen.org/xen-users
>>> >>
>>> >>
>>> >
>>
>>
Attachments: Capture.PNG (134 KB)


rulerof at gmail

May 12, 2012, 1:29 PM

Post #13 of 33 (2293 views)
Permalink
Re: gaming on multiple OS of the same machine? [In reply to]

Ah yes, those USB devices are from my passthrough controllers :)

In the interests of getting this thing up and running, I'm going to
omit the GPLPV drivers for now. It might be worthwhile to see if that
fellow over at AMD, Wei, could light a fire under some folks to debug
this seeming conflict between GPLPV and AMD Radeon drivers. I say
"seeming" because I admit it's possible that I've got a classic case
of "you're doing it wrong." Heh.

I'm really intrigued to start reading your WIP guide. If you'd like
to make a collaborative effort of it, by all means count me in! It
might even be worthwhile to package this whole thing at some point...
Gotta love open source :)

I'll be hacking at this for a few more hours today. I'll post back
with any results I manage to come across!

Cheers,
Andrew Bobulsky


On Sat, May 12, 2012 at 4:23 PM, Casey DeLorme <cdelorme [at] gmail> wrote:
>
>
> Yes, that is normal from my experience.
>
> I don't have the USB devices, but that might mean that you haven't installed the
> USB Drivers from your Motherboard Maker.
>
> I do have an Unknown Device and Xen PCI Device #0 though, in addition to an
> Unknown SCSI Device.
>
> I've been using my system for a week with PV drivers, short of occasionally
> crappy Disk IO, I haven't had bluescreens or problems with everyday use
> (ranging from gaming, multimedia playback, and even work).
>
> It's awesome that more people are getting into this, because it would be
> really helpful to have a solid guide written for people who are brand new to
> Xen but looking for the same sort of setup.
>
>
> I began writing a very verbose guide for users somewhat new to linux, with
> screenshots and everything.  Attached is a PDF copy of what I had written as
> of two months ago.  Keep in mind that it is over a month's worth of testing
> out of date, not to mention incomplete.  However you may find some of the
> contents helpful, and are welcome to use it if you intend to add your own
> material.
>
> I am a week away from finals in college, so I haven't had a lot of time to
> finish up all the changes.  Once summer starts I had planned to finish the
> guide and post functional videos of everything.
>
> ~Casey
>
> On Sat, May 12, 2012 at 4:06 PM, Andrew Bobulsky <rulerof [at] gmail> wrote:
>>
>> Sorry, but another question if you could comment, Casey:
>>
>> Is this normal when using GPLPV? (screenshot attached)
>>
>> I'd love to actually get this running solidly, and help to write that
>> guide of yours ;)
>>
>> Cheers,
>> Andrew Bobulsky
>>
>>
>> On Sat, May 12, 2012 at 4:00 PM, Andrew Bobulsky <rulerof [at] gmail>
>> wrote:
>> > Welp, even with the FLR reset, both by restarting the system and safe
>> > removal... still get the atikmpag.sys bsod :(
>> >
>> > Gonna try removing the drivers, removing GPLPV, and doing it in reverse.
>> >
>> > Let's see what we get! :)
>> >
>> > On Sat, May 12, 2012 at 2:43 PM, Casey DeLorme <cdelorme [at] gmail>
>> > wrote:
>> >> More specifically the "first time you initialize the GPU".  It's not
>> >> any of
>> >> the operating systems, it is the card itself not getting reset.  It
>> >> appears
>> >> to only be a Windows problem, because nobody has reported this issue
>> >> when
>> >> running say Ubuntu with a passed GPU.
>> >>
>> >> So if you have a bunch of HVM's, your Windows can be given ID 100 and
>> >> still
>> >> work, provided it is the first time you have used the GPU (hence first
>> >> boot
>> >> of Windows).
>> >>
>> >> I am glad I could help, and if you have some extra storage space I
>> >> recommend
>> >> using "dd" and a second LV partition to copy a working backup of
>> >> Windows
>> >> post-install before experimenting.  It can save you some time/effort.
>> >>
>> >> ~Casey
>> >>
>> >> On Sat, May 12, 2012 at 2:28 PM, Andrew Bobulsky <rulerof [at] gmail>
>> >> wrote:
>> >>>
>> >>> Casey,
>> >>>
>> >>> Wow:
>> >>>
>> >>> > B)  When you install your ATI drivers, you need to do so on first
>> >>> > boot
>> >>> > so
>> >>> > the card is fresh.  If you reboot Windows and not the whole machine
>> >>> > before
>> >>> > trying to install the ATI drivers, the card hasn't been "reset" and
>> >>> > either
>> >>> > the installation will BSOD or if you are successful the drivers are
>> >>> > almost
>> >>> > certainly bugged and you will have problems in the future.  My
>> >>> > solution,
>> >>> > reboot Xen before installing ATI drivers.  OR!  Use the USB Safe
>> >>> > Device
>> >>> > removal and then install them.
>> >>> >
>> >>> > To fix your BSOD you may have to safe mode reboot, uninstall the ATI
>> >>> > drivers, reboot the entire computer (Xen), and then try again.
>> >>>
>> >>> My first instinct on reading that was to literally facepalm myself.
>> >>> Thank heavens I wear glasses.  :D
>> >>>
>> >>> So Basically (or perhaps, "in essence") the drivers need to be
>> >>> installed when the ID of the DomU is 1.  Fresh boot of Xen, first
>> >>> post-Xen boot of the DomU with the device attached.  Gonna try that
>> >>> now :)
>> >>>
>> >>> I DO recall the FLR thing you mentioned.  Haven't run into that yet
>> >>> because I haven't run into a successful install of the drivers :D
>> >>>
>> >>> Thank you so much.  I most certainly would have screwed it up again I
>> >>> think!  I'm on round 3 of Windows installation.  GPLPV is installed,
>> >>> so let's see how this goes...
>> >>>
>> >>> Cheers,
>> >>> Andrew Bobulsky
>> >>>
>> >>> On Sat, May 12, 2012 at 2:19 PM, Casey DeLorme <cdelorme [at] gmail>
>> >>> wrote:
>> >>> > Andrew,
>> >>> >
>> >>> > I hate that error with a passion, but the good news is I may have
>> >>> > figured
>> >>> > out exactly what causes it thanks to hundreds of encounters and some
>> >>> > insight
>> >>> > from Tobias Geiger's posts on VGA Performance Degradation.
>> >>> >
>> >>> > First, it isn't the GPLPV drivers, it's your ATI card/drivers.
>> >>> >
>> >>> > You may have noticed that the first boot of your system your ATI
>> >>> > card
>> >>> > performs optimally in Windows, well when you reboot windows and not
>> >>> > the
>> >>> > whole Xen system, the GPU does not get reset.
>> >>> >
>> >>> > It has been speculated that this is an FLR bug or perhaps more
>> >>> > specifically
>> >>> > a Windows FLR bug.
>> >>> >
>> >>> > The solution, at boot time go to the USB Safe Ejection option, and
>> >>> > eject
>> >>> > the
>> >>> > card.  Your screen goes black for 1-3 seconds and it automatically
>> >>> > reinstalls.  This is essentially a forced FLR, and will fix the
>> >>> > performance
>> >>> > issues... at least until you reboot windows again.
>> >>> >
>> >>> >
>> >>> >
>> >>> > My Solution(s) to Atikmpag.sys errors:
>> >>> >
>> >>> > I encountered this bug in two very specific instances.
>> >>> >
>> >>> > A)  If I was using a buggy device, in my case my BlueTooth adapter
>> >>> > was
>> >>> > dying
>> >>> > and I didn't realize it until over a week of failed testing.  The
>> >>> > buggy
>> >>> > BlueTooth device was causing ATI's drivers to freak, how they are
>> >>> > related is
>> >>> > beyond me.  In conclusion, try unplugging any extra devices when
>> >>> > testing.
>> >>> >
>> >>> > B)  When you install your ATI drivers, you need to do so on first
>> >>> > boot
>> >>> > so
>> >>> > the card is fresh.  If you reboot Windows and not the whole machine
>> >>> > before
>> >>> > trying to install the ATI drivers, the card hasn't been "reset" and
>> >>> > either
>> >>> > the installation will BSOD or if you are successful the drivers are
>> >>> > almost
>> >>> > certainly bugged and you will have problems in the future.  My
>> >>> > solution,
>> >>> > reboot Xen before installing ATI drivers.  OR!  Use the USB Safe
>> >>> > Device
>> >>> > removal and then install them.
>> >>> >
>> >>> >
>> >>> > To fix your BSOD you may have to safe mode reboot, uninstall the ATI
>> >>> > drivers, reboot the entire computer (Xen), and then try again.
>> >>> >
>> >>> >
>> >>> > Also, if you install the Windows Update ATI drivers, you're
>> >>> > essentially
>> >>> > screwed since it will automatically reinstall them every boot, which
>> >>> > means
>> >>> > they get reinstalled before you can eject the device to force FLR.  The only workaround I
>> >>> > have
>> >>> > found for this is to reinstall Windows.  If anyone knows how to tell
>> >>> > Windows
>> >>> > to "really" delete an installed driver that would be fabulous, but
>> >>> > just
>> >>> > the
>> >>> > checkbox on device uninstall doesn't do it when you install the
>> >>> > Windows
>> >>> > Update driver.
>> >>> >
>> >>> > Hope that helps with a few things, let me know if I wasn't clear
>> >>> > (It's a
>> >>> > confusing topic to begin with).
>> >>> >
>> >>> > ~Casey
>> >>> >
>> >>> > On Sat, May 12, 2012 at 2:10 PM, chris <tknchris [at] gmail> wrote:
>> >>> >>
>> >>> >> kpartx being one of them! awesome tool for lvm backed domU's
>> >>> >>
>> >>> >>
>> >>> >> On Sat, May 12, 2012 at 1:48 PM, Casey DeLorme <cdelorme [at] gmail>
>> >>> >> wrote:
>> >>> >>>
>> >>> >>> Hi Andrew,
>> >>> >>>
>> >>> >>> You mean the Windows DomU configuration, right?  I put it up on
>> >>> >>> pastebin
>> >>> >>> here along with a couple other configuration files:
>> >>> >>> http://pastebin.com/9E1g1BHf
>> >>> >>>
>> >>> >>> I'm just using normal LV partitions and passing them to an HVM,
>> >>> >>> there
>> >>> >>> is
>> >>> >>> no special trick so any LVM guide should put you on the right
>> >>> >>> track.
>> >>> >>>
>> >>> >>> I named my SSD VG "xen" so my drives are all found at
>> >>> >>> /dev/xen/lvname.
>> >>> >>>
>> >>> >>> **********
>> >>> >>>
>> >>> >>> The only convoluted part is my Dom0 installation, since I used EFI
>> >>> >>> boot
>> >>> >>> and an LV to store root (/), so I have two 256MB partitions, one
>> >>> >>> FAT32
>> >>> >>> for
>> >>> >>> EFI, one Ext4 for boot (/boot) and then the rest of the disk to
>> >>> >>> LVM.
>> >>> >>>  I did
>> >>> >>> the LVM setup right in the installation, added the SSD partition
>> >>> >>> (PV)
>> >>> >>> to a
>> >>> >>> Volume Group (VG) then threw on a few partitions.
>> >>> >>>
>> >>> >>> I created a linux root partition of 8GB, a home partition of 20GB,
>> >>> >>> and
>> >>> >>> a
>> >>> >>> swap partition of 2GB.  I mapped those in the configuration, then
>> >>> >>> I
>> >>> >>> went on
>> >>> >>> ahead and made a 140GB partition for windows, and two 4GB
>> >>> >>> partitions
>> >>> >>> for
>> >>> >>> PFSense and NGinx.
>> >>> >>>
>> >>> >>> Once the system is installed, the standard LVM tools can be used,
>> >>> >>> lvcreate, lvresize, lvremove, lv/vg/pvdisplay commands, etc...
>> >>> >>>
>> >>> >>> My Disk IO is not optimal, which might be because I run four
>> >>> >>> systems
>> >>> >>> off
>> >>> >>> the same drive at the same time, so if you intend to use many
>> >>> >>> systems
>> >>> >>> you
>> >>> >>> may want to split the drives onto multiple physical disks.
>> >>> >>>  However, I
>> >>> >>> have
>> >>> >>> reason to believe my IO problems are a Xen bug, I just haven't had
>> >>> >>> time to
>> >>> >>> test/prove it.
>> >>> >>>
>> >>> >>> **********
>> >>> >>>
>> >>> >>> When you pass a LV to an HVM it treats it like a physical disk,
>> >>> >>> and it
>> >>> >>> will create a partition table, MBR code, and partitions inside the
>> >>> >>> LV
>> >>> >>> (partitions within partitions).
>> >>> >>>
>> >>> >>> When I get some free time I want to write up a pretty verbose
>> >>> >>> guide on
>> >>> >>> LVM specifically for Xen, there are plenty of things I've learned
>> >>> >>> about
>> >>> >>> accessing the partitions too.
>> >>> >>>
>> >>> >>> Some things I learned recently with Xen, IDE drives (hdX) only
>> >>> >>> allow
>> >>> >>> four
>> >>> >>> passed devices, so if you have more than 3 storage partitions you
>> >>> >>> will
>> >>> >>> want
>> >>> >>> to use SCSI (sdX) for them, but SCSI drives are not bootable.
>> >>> >>>  Hence
>> >>> >>> my
>> >>> >>> configuration has "hda" for the boot drive (lv partition), and sdX
>> >>> >>> for
>> >>> >>> all
>> >>> >>> storage drives (lv partitions) (X = alphabetical increment, a, b,
>> >>> >>> c, d,
>> >>> >>> etc).
>> >>> >>>
>> >>> >>> **********
>> >>> >>>
>> >>> >>> Hope that helps a bit, let me know if you have any other questions
>> >>> >>> or
>> >>> >>> if
>> >>> >>> that didn't answer them correctly.
>> >>> >>>
>> >>> >>> ~Casey
>> >>> >>>
>> >>> >>>
>> >>> >>> On Sat, May 12, 2012 at 1:10 PM, Andrew Bobulsky
>> >>> >>> <rulerof [at] gmail>
>> >>> >>> wrote:
>> >>> >>>>
>> >>> >>>> Hello Casey,
>> >>> >>>>
>> >>> >>>> Quick question!
>> >>> >>>>
>> >>> >>>> What's the config file entry for the LVM-type setup you have
>> >>> >>>> going on
>> >>> >>>> for the guest disk look like?  Might you be able to point me to a
>> >>> >>>> guide that'll show me how to set up a disk like that?
>> >>> >>>>
>> >>> >>>> Thanks!
>> >>> >>>>
>> >>> >>>> -Andrew Bobulsky
>> >>> >>>>
>> >>> >>>> On Fri, May 11, 2012 at 6:51 PM, Casey DeLorme
>> >>> >>>> <cdelorme [at] gmail>
>> >>> >>>> wrote:
>> >>> >>>> > Hello Peter,
>> >>> >>>> >
>> >>> >>>> >
>> >>> >>>> > Question #1: Performance
>> >>> >>>> >
>> >>> >>>> > With x86 Virtualization hardware such as CPU's and Memory are
>> >>> >>>> > mapped
>> >>> >>>> > not
>> >>> >>>> > layered, there should be almost no difference in speeds from
>> >>> >>>> > running
>> >>> >>>> > natively.
>> >>> >>>> >
>> >>> >>>> > I am running Windows 7 HVM with an ATI Radeon 6870.  My system
>> >>> >>>> > has
>> >>> >>>> > 12GB of
>> >>> >>>> > RAM, and a Core i7 2600.  I gave Windows 4 vcores and 6GB of
>> >>> >>>> > memory,
>> >>> >>>> > Windows
>> >>> >>>> > Experience index gives me 7.5 for CPU and 7.6 for RAM.  With
>> >>> >>>> > VGA
>> >>> >>>> > Passthrough
>> >>> >>>> > I have 7.8 for both graphics scores.  I am running all my
>> >>> >>>> > systems
>> >>> >>>> > on
>> >>> >>>> > LVM
>> >>> >>>> > partitions on an OCZ Vertex 3 Drive, without PV Drivers windows
>> >>> >>>> > scored
>> >>> >>>> > 6.2
>> >>> >>>> > for HDD speeds, with PV drivers it jumped to 7.8.
>> >>> >>>> >
>> >>> >>>> > Scores aside, performance with CPU/RAM is excellent, I am
>> >>> >>>> > hoping to
>> >>> >>>> > create a
>> >>> >>>> > demo video of my system when I get some time (busy with
>> >>> >>>> > college).
>> >>> >>>> >
>> >>> >>>> > My biggest concern right now is Disk IO ranges from excellent
>> >>> >>>> > to
>> >>> >>>> > abysmal,
>> >>> >>>> > but I have a feeling the displayed values and actual speeds
>> >>> >>>> > might
>> >>> >>>> > be
>> >>> >>>> > different.  I'll be putting together an extensive test with
>> >>> >>>> > this
>> >>> >>>> > later, but
>> >>> >>>> > let's just say IO speeds vary (even with PV drivers).  The Disk
>> >>> >>>> > IO
>> >>> >>>> > does not
>> >>> >>>> > appear to have any effect on games from my experience, so it
>> >>> >>>> > may
>> >>> >>>> > only
>> >>> >>>> > be
>> >>> >>>> > write speeds.  I have not run any disk benchmarks.
>> >>> >>>> >
>> >>> >>>> >
>> >>> >>>> > Question #2: GPU Assignment
>> >>> >>>> >
>> >>> >>>> > I have no idea how Dual GPU cards work, so I can't really
>> >>> >>>> > answer
>> >>> >>>> > this
>> >>> >>>> > question.
>> >>> >>>> >
>> >>> >>>> > I can advise you to be on the lookout for motherboards with
>> >>> >>>> > NF200
>> >>> >>>> > chipsets
>> >>> >>>> > or strange PCI Switches, I bought an ASRock Extreme7 Gen3,
>> >>> >>>> > great
>> >>> >>>> > bought but
>> >>> >>>> > NF200 is completely incompatible with VT-d, ended up with only
>> >>> >>>> > one
>> >>> >>>> > PCIe slot
>> >>> >>>> > to pass.  I can recommend the ASRock Extreme4 Gen3, got it
>> >>> >>>> > right
>> >>> >>>> > now,
>> >>> >>>> > if I
>> >>> >>>> > had enough money to buy a bigger PSU and a second GPU I would
>> >>> >>>> > be
>> >>> >>>> > doing
>> >>> >>>> > what
>> >>> >>>> > you are planning to.
>> >>> >>>> >
>> >>> >>>> >
>> >>> >>>> > Question #3:  Configuration
>> >>> >>>> >
>> >>> >>>> > Two approaches to device connection, USB Passthrough and PCI
>> >>> >>>> > Passthrough.  I
>> >>> >>>> > haven't tried USB Passthrough, but I have a feeling it wouldn't
>> >>> >>>> > work
>> >>> >>>> > with
>> >>> >>>> > complex devices that require OS drivers, such as BlueTooth
>> >>> >>>> > receivers
>> >>> >>>> > or
>> >>> >>>> > an
>> >>> >>>> > XBox 360 Wireless adapter.
>> >>> >>>> >
>> >>> >>>> > I took the second approach of passing the USB Controller, but
>> >>> >>>> > this
>> >>> >>>> > will vary
>> >>> >>>> > by hardware.  The ASRock Extreme4 Gen3 has four USB PCI
>> >>> >>>> > Controllers, I
>> >>> >>>> > don't
>> >>> >>>> > have any idea how you would check this stuff from their
>> >>> >>>> > manuals, I
>> >>> >>>> > found out
>> >>> >>>> > when I ran "lspci" from Linux Dom0.
>> >>> >>>> >
>> >>> >>>> > I had no luck with USB 3.0, many devices weren't functional
>> >>> >>>> > when
>> >>> >>>> > connected
>> >>> >>>> > to it, so I left my four USB 3.0 ports to my Dom0, and passed
>> >>> >>>> > all
>> >>> >>>> > my
>> >>> >>>> > USB 2.0
>> >>> >>>> > ports.
>> >>> >>>> >
>> >>> >>>> > Again hardware specific, one of the buses had 4 ports, the other
>> >>> >>>> > had
>> >>> >>>> > only two,
>> >>> >>>> > I bought a 4 port USB PCI plate and attached the additional USB
>> >>> >>>> > pins
>> >>> >>>> > from
>> >>> >>>> > the board to turn the 2-port into a 6-port controller.
>> >>> >>>> >
>> >>> >>>> > I use a ton of USB devices on my Windows system, Disk IO blows,
>> >>> >>>> > but
>> >>> >>>> > everything else functions great.  With PCI Passed USB I am able
>> >>> >>>> > to
>> >>> >>>> > use
>> >>> >>>> > an
>> >>> >>>> > XBox 360 Wireless Adapter, 2 Wireless USB Keyboards in
>> >>> >>>> > different
>> >>> >>>> > areas
>> >>> >>>> > of
>> >>> >>>> > the room, a Hauppauge HD PVR, a Logitech C910 HD Webcam, and a
>> >>> >>>> > Logitech
>> >>> >>>> > Wireless Mouse.  I had BlueTooth but I got rid of it, the
>> >>> >>>> > device
>> >>> >>>> > itself went
>> >>> >>>> > bad and was causing my system to BlueScreen.
>> >>> >>>> >
>> >>> >>>> > When I tested USB 3.0, I got no video from my Hauppauge HD PVR
>> >>> >>>> > or my
>> >>> >>>> > Logitech
>> >>> >>>> > C910 webcam, and various devices when connected failed to
>> >>> >>>> > function
>> >>> >>>> > right.
>> >>> >>>> >
>> >>> >>>> >
>> >>> >>>> > Question #4:  Other?
>> >>> >>>> >
>> >>> >>>> > I am 100% certain you could get a system running 2 Windows 7
>> >>> >>>> > HVM's
>> >>> >>>> > up
>> >>> >>>> > for
>> >>> >>>> > gaming, but you may need to daisy chain some USB devices if you
>> >>> >>>> > want
>> >>> >>>> > more
>> >>> >>>> > than just a keyboard and mouse for each.
>> >>> >>>> >
>> >>> >>>> > Also, if you are not confident in your ability to work with
>> >>> >>>> > *nix, I
>> >>> >>>> > wouldn't
>> >>> >>>> > advise it.  I had spent two years tinkering with Web Servers in
>> >>> >>>> > Debian, so I
>> >>> >>>> > thought I would have an easy time of things.
>> >>> >>>> >
>> >>> >>>> > I tried it on a week off, ended up taking me 2 months to
>> >>> >>>> > complete
>> >>> >>>> > my
>> >>> >>>> > setup.
>> >>> >>>> >  The results are spectacular, but be prepared to spend many
>> >>> >>>> > hours
>> >>> >>>> > debugging
>> >>> >>>> > unless you find a really good guide.
>> >>> >>>> >
>> >>> >>>> > I would recommend going for a Two Windows on One Rig, and
>> >>> >>>> > duplicate
>> >>> >>>> > that rig
>> >>> >>>> > for a second machine, and I recommend that for two reasons.  If
>> >>> >>>> > you
>> >>> >>>> > are
>> >>> >>>> > successful with the first machine, you can easily copy the
>> >>> >>>> > process.
>> >>> >>>> >  This
>> >>> >>>> > will save you hours of attempting to get a whole four Gaming
>> >>> >>>> > machines
>> >>> >>>> > working on one system.
>> >>> >>>> >
>> >>> >>>> >
>> >>> >>>> > As stated, I only run one gaming machine, but I do have two
>> >>> >>>> > other
>> >>> >>>> > HVM's
>> >>> >>>> > running, one manages my households network and the other is a
>> >>> >>>> > private
>> >>> >>>> > web/file server.  So, performance wise Xen can do a lot.
>> >>> >>>> >
>> >>> >>>> > Best of luck,
>> >>> >>>> >
>> >>> >>>> > ~Casey
>> >>> >>>> >

_______________________________________________
Xen-users mailing list
Xen-users [at] lists
http://lists.xen.org/xen-users


james.harper at bendigoit

May 12, 2012, 3:37 PM

Post #14 of 33 (2270 views)
Permalink
Re: gaming on multiple OS of the same machine? [In reply to]

>
> Sorry, but another question if you could comment, Casey:
>
> Is this normal when using GPLPV? (screenshot attached)
>

That doesn't look right.

Can you drill down into the Storage controllers, Disk drives and Network adapters and post another screenshot? I suspect you'll still be running on emulated devices and GPLPV isn't working at all. I wonder if there is a problem with interrupt sharing with your ATI card. Can you select 'Resources by type' from the Device Manager View menu, drill down into Interrupt request, and post a screenshot of that too?

Thanks

James






cdelorme at gmail

May 12, 2012, 5:48 PM

Post #15 of 33 (2268 views)
Permalink
Re: gaming on multiple OS of the same machine? [In reply to]

Attached is a screenshot of mine, along with both a disk and my network
adapter properties opened to the drivers page.

Notice "PV" in the details, pretty sure they are working.

Additionally I ran Windows Experience Index and jumped from 6.2 to 7.8 in
the Disk scoring.

Attachments: Probably Working.png (164 KB)


james.harper at bendigoit

May 12, 2012, 7:26 PM

Post #16 of 33 (2278 views)
Permalink
Re: gaming on multiple OS of the same machine? [In reply to]

Yes that does seem to be working. The "xen pci device #0" must be the entry for the pci passthrough... I should probably filter that out. If you add another line to the veto_devices value in the HKLM\SYSTEM\CurrentControlSet\Services\XenPCI\Parameters key it should exclude it and you won't get the failed device. Try adding just 'pci' and if that doesn't work you'll need to get the exact name from xenstore-ls under devices.
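
For reference, a hypothetical .reg sketch of that change (the key and value name are as James describes, but the entries already present in veto_devices on any given system are unknown, so export the key first and merge rather than overwrite):

```reg
Windows Registry Editor Version 5.00

; Hypothetical sketch: sets veto_devices (REG_MULTI_SZ) to contain just "pci"
; ("pci" encoded as UTF-16LE hex, double-NUL terminated).
; WARNING: importing this replaces the whole list -- re-add any entries
; already present in your veto_devices value.
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\XenPCI\Parameters]
"veto_devices"=hex(7):70,00,63,00,69,00,00,00,00,00
```

If 'pci' alone doesn't match, the exact device name can be read from xenstore-ls under devices, as James notes.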

If you install the debug version of gplpv there should be a heap of debug info logged in /var/log/xen/qemu-dm-<domu name>.log. If you can send that to me in the ATI-not-working case it might tell me something useful.

James





peter.vandendriessche at gmail

May 13, 2012, 3:58 AM

Post #17 of 33 (2273 views)
Permalink
Re: gaming on multiple OS of the same machine? [In reply to]

Casey,

On Sat, May 12, 2012 at 12:51 AM, Casey DeLorme <cdelorme [at] gmail> wrote:

> Question #2: GPU Assignment
> I can advise you to be on the lookout for motherboards with NF200 chipsets
> or strange PCI Switches,


You mean *without* NF200, right?



> Question #4: Other?
> I would recommend going for a Two Windows on One Rig, and duplicate that
> rig for a second machine, and I recommend that for two reasons. If you are
> successful with the first machine, you can easily copy the process. This
> will save you hours of attempting to get a whole four Gaming machines
> working on one system.
>

So, you are suggesting that 4x Windows on the same machine will be more
difficult than 2x Windows on the same machine? Why is this?

Best regards,
Peter


peter.vandendriessche at gmail

May 13, 2012, 4:30 AM

Post #18 of 33 (2301 views)
Permalink
Re: gaming on multiple OS of the same machine? [In reply to]

On Sat, May 12, 2012 at 12:54 AM, Andrew Bobulsky <rulerof [at] gmail> wrote:

> Hello Peter,
>
> I've done exactly this, and I can affirm that it kicks ass ;)
>

Wonderful, that's the best answer the existential question can have. :)


Make sure that you actually have the cores to give to those DomUs.
> Specifically, if you plan on making each guest a dual core machine,
> and have 4 guests, get an 8 core chip.


8 cores or 8 threads? I was planning to get one of those 4-core/8-thread
CPUs with hyperthreading. Sufficient or not?
I read in the documentation that 2 threads are reserved for the Windows
graphics anyway, so it'd have to be 4 virtual cores anyway.


> 2) Is it possible to split dual GPUs, e.g. drive 4 OSes with 2x Radeon HD
> 6990 (=4 GPUs in 2 PCI-e slots)?
> Alas, no. Not because Xen or IOMMU won't allow it, but because of the
> architecture of the 6990. While the individual GPUs /can/ be split up
> from the standpoint of PCIe, all of the video outputs are hardwired to
> the "primary" GPU. So while it would work in theory, there's nowhere
> to plug in the second monitor.


So, are there any other dual GPUs that do work here? They don't have to be
high-end (low power is even preferred), but given that most graphics cards
are 2 pci-e slots high, having 2 dual cards or 4 single cards makes a HUGE
difference in motherboard options, case requirements, cooling solutions,
connectivity (pci-e wifi, pci-e usb controllers, ...) so anything that
would deliver 4 discrete GPUs via 2 PCI-e slots would be far better than
any other option.


I suggest picking up a Highpoint RocketU 1144A USB3
> controller. It provides four USB controllers on one PCIe 4x card,
> essentially giving you four different PCIe devices, one for each port,
> that can be assigned to individual VMs.
>

If the 2x dual GPU option works, then that's certainly possible. Otherwise,
I'll really need all PCI-e slots for the GPUs (used or covered). And that
is a problem in itself, as I'd want to use wifi for the networking and it
needs a PCI-e slot.


If you're still only in these
> conceptual stages of your build, I may have some suggestions for you
> if you like.
>

I am still in the conceptual stage, and I'm very much willing to listen.

Currently I'm mainly wondering how to get 4 GPUs cooled cheaply.
Watercooling is overkill (I'm not needing high graphics like HD6990 anyway,
just wanting to play games at medium-low resolutions for the coming years)
but with air cooling they will block each other's airflow and steal the
slots for an extra wifi card or for the USB controller. So anything to get
2x dual GPU working here would be great.


Now that I think of it, you'll have the least amount of hassle by
> doing "secondary VGA passthrough," which is just assigning a video
> card to a vm as you would any other PCIe device. I'll readily admit
> that this is nowhere near as cool as primary passthrough, but it
> involves the least amount of work.
>

Where can I find information on the difference between these? Google
suggests that primary/secondary VGA passthrough is passing the
primary/secondary GPU to the VM, but that doesn't seem to make sense here...


Best regards,
Peter


cdelorme at gmail

May 13, 2012, 11:42 AM

Post #19 of 33 (2314 views)
Permalink
Re: gaming on multiple OS of the same machine? [In reply to]

Hi Peter,

You are correct, I meant to type "without" the NF200 chip. I will now
explain in detail:

I checked every major manufacturer's high-end boards, searching for one with
the best features for the price, and was aiming for the most PCIe slots I
could get.  Pretty much every single board with more than 3 PCIe (x16) slots
came with some form of PCI switch.  Most of these PCI switches break IOMMU
in one way or another.

On the ASRock Extreme7 Gen3 it came with two, the NF200 and the PLX
PEX8606. The NF200 is completely incompatible with IOMMU, anything sitting
behind it creates a layer of fail between your card and success.

The PLX was an entirely different form of problem: it merges device
"function" with device identifiers. If you run "lspci" you get a list of
your devices, they are identified by "bus:device.function".

The last two PCIe slots on the ASRock Extreme7 Gen3 shared functionality
with onboard components, so for example the second to last was shared with
my dual onboard LAN and the ASMedia SATA controller. When I used
xen-pciback.hide to remove just the graphics card, it removed the other two
components as well (treating them as one "device"). As a result I lost
internet and four drive ports worth of storage.

In conclusion, I've already tried to find a way to make a 3-4x gaming
machine and failed due to hardware problems; I didn't even get a chance to
run into software issues.
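
For anyone attempting the same, the xen-pciback.hide mechanism mentioned above boils down to a Dom0 kernel parameter plus a guest config entry. A sketch, with a made-up bus:device.function address (find yours with lspci):

```
# Dom0 kernel line (GRUB): hide the GPU from Dom0 at boot.
# 01:00.0 is a placeholder BDF address taken from lspci output.
xen-pciback.hide=(01:00.0)

# Guest config: hand the hidden device to the HVM.
pci = [ '01:00.0' ]
```

On a board with a PLX or NF200 switch, hiding one BDF can drag the other devices behind the switch along with it, which is exactly the failure described above.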

*************

In my opinion it would be cheaper both in time and money to buy two
computers and reproduce one setup on the other, than to try getting four
machines up on a single physical system.

A 4-core i7 with hyperthreading will be treated like 8 vcores, and while
you "could" pass two to each Windows machine, you end up with nothing left
for the control OS (Dom0) or the hypervisor (Xen) itself.  They would share
CPUs of course, but in my opinion you're bound to run into resource
conflicts at high load.
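
As a rough sketch of that allocation in guest config terms (values illustrative, not a recommendation):

```
# Per-guest config: 2 vcpus, pinned clear of the threads left for Dom0.
vcpus = 2
cpus = "2-3"        # a second guest would get "4-5", and so on

# Xen/GRUB boot options reserving capacity for Dom0:
#   dom0_max_vcpus=2 dom0_vcpus_pin
```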

*************

I didn't think the dual GPUs would work, for the same reason my PLX chip
created trouble.  While it has two GPUs, it's probably treated as a "single
device" with multiple "functions", and you can't share a single device
between multiple machines.

So, you will need a motherboard with four distinct PCIe x16 slots that are
not tied to some PCI Switch such as the PLX or NF200 chip.

I can't say that such a board doesn't exist, but my understanding is that
no board manufacturer is producing consumer hardware specifically for
virtualization, and NF200 or PLX are beneficial to anyone running a single
OS system with multi-GPU configurations (SLI/Crossfire), which would
account for the majority of their target market.

*************

Armed with this knowledge, here is where you may run into problems:

- Finding a board with 4x PCIe x16 Slots not tied to a PCI Switch
- Sparing enough USB ports for all machines input devices
- Limited to around 3GB of RAM per HVM unless you buy 8GB RAM Chips
- Will need a 6-core i7 to power all systems without potential resource
conflicts
- Encountering bugs nobody else has when you reach that 3rd or 4th HVM

If I were in your shoes, I would do two systems, others have already
mentioned success, so you'll have an easier time of getting it setup, and
you won't buy all that hardware only to run into some limitation you hadn't
planned on.

*************

I started my system with Stock air cooling and 2x 120MM fans in a cheap
mid-tower computer case. The CPU never went over 60C, the GPU doesn't
overheat either, and the ambient temperature is around 70F.

I did upgrade my stock CPU fan to a Corsair H70 Core self-contained liquid
cooling system; it was inexpensive and my CPU stays around 40C on average,
plus it's even quieter than it was before.

I have never run more than one GPU in my computers before, so I don't know
if there is some special magic that happens when you have two or more that
they suddenly get even hotter, but I have to imagine that not to be the
case unless you're doing some serious overclocking.

*************

The ASRock Extreme4 Gen3 does have enough PCIe slots that I could connect
three GPUs and still have space for a single-slot PCIe device, but I only
have a 650W power supply, and have no need for more than one Windows
instance.

*************

Secondary VS Primary:

Secondary cards become available after the system boots up. Primary cards
are used from the moment the system boots: your primary card is where you
will see POST output during boot-time, and the Windows logo.

Secondary passthrough works great for gaming, shows the display after the
machine boots without any problems, and takes literally no extra effort to
set up on your part.

Primary passthrough requires custom ATI patching, and what exists may not
work for all cards.
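For concreteness, the difference shows up as a line or two in the guest config. A minimal, hypothetical xl config fragment (the BDF 04:00.0 is an invented example, not a real address):

```
# Secondary passthrough: hand the GPU to the guest like any other PCIe device.
pci = [ '04:00.0' ]

# Primary passthrough additionally sets the following (and, for many ATI
# cards, needs the patches mentioned above):
# gfx_passthru = 1
```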

I began looking into Primary passthrough very recently, because I use my
machine for more than just games and ran into a problem. Software like
CAD, Photoshop, and 3D Sculpting tools use OpenGL and only work with the
primary GPU, which means they either don't run or run without GPU
acceleration (slowly).

*************

A lot to take in, but I hope my answers help a bit. If you have more
questions I'll be happy to share what knowledge I can.

~Casey

On Sun, May 13, 2012 at 7:30 AM, Peter Vandendriessche <
peter.vandendriessche [at] gmail> wrote:

> On Sat, May 12, 2012 at 12:54 AM, Andrew Bobulsky <rulerof [at] gmail>wrote:
>
>> Hello Peter,
>>
>> I've done exactly this, and I can affirm that it kicks ass ;)
>>
>
> Wonderful, that's the best answer the existential question can have. :)
>
>
> Make sure that you actually have the cores to give to those DomUs.
>> Specifically, if you plan on making each guest a dual core machine,
>> and have 4 guests, get an 8 core chip.
>
>
> 8 core or 8 threads? I was planning to get one of those 4core/8thread CPUs
> via hyperthreading. Sufficient or not?
> I read in the documentation that 2 threads are reserved for the windows
> graphics anyway, so it'd have to be 4 virtual cores anyway.
>
>
> > 2) Is it possible to split dual GPUs, e.g. drive 4 OSes with 2x Radeon
>> HD 6990 (=4 GPUs in 2 PCI-e slots)?
>> Alas, no. Not because Xen or IOMMU won't allow it, but because of the
>> architecture of the 6990. While the individual GPUs /can/ be split up
>> from the standpoint of PCIe, all of the video outputs are hardwired to
>> the "primary" GPU. So while it would work in theory, there's nowhere
>> to plug in the second monitor.
>
>
> So, are there any other dual GPUs that do work here? They don't have to be
> high-end (low power is even preferred), but given that most graphics cards
> are 2 pci-e slots high, having 2 dual cards or 4 single cards makes a HUGE
> difference in motherboard options, case requirements, cooling solutions,
> connectivity (pci-e wifi, pci-e usb controllers, ...) so anything that
> would deliver 4 discrete GPUs via 2 PCI-e slots would be far better than
> any other option.
>
>
> I suggest picking up a Highpoint RocketU 1144A USB3
>> controller. It provides four USB controllers on one PCIe 4x card,
>> essentially giving you four different PCIe devices, one for each port,
>> that can be assigned to individual VMs.
>>
>
> If the 2x dual GPU option works, then that's certainly possible.
> Otherwise, I'll need really all PCI-e slots for the GPUs (used or covered).
> And that is a problem in itself, as I'd want to use wifi for the networking
> and it needs a PCI-e slot.
>
>
> If you're still only in these
>> conceptual stages of your build, I may have some suggestions for you
>> if you like.
>>
>
> I am still in the conceptual stage, and I'm very much willing to listen.
>
> Currently I'm mainly wondering how to get 4 GPUs cooled cheaply.
> Watercooling is overkill (I'm not needing high graphics like HD6990 anyway,
> just wanting to play games at medium-low resolutions for the coming years)
> but with air cooling they will block eachother's airflow and steal the
> slots for an extra wifi card or for the USB controller. So anything to get
> 2x dual GPU working here would be great.
>
>
> Now that I think of it, you'll have the least amount of hassle by
>> doing "secondary VGA passthrough," which is just assigning a video
>> card to a vm as you would any other PCIe device. I'll readily admit
>> that this is nowhere near as cool as primary passthrough, but it
>> involves the least amount of work.
>>
>
> Where can I find information on the difference between these? Google
> suggests that primary/secondary VGA passthrough is passing the
> primary/secondary GPU to the VM, but that doesn't seem to make sense here...
>
>
> Best regards,
> Peter
>
> _______________________________________________
> Xen-users mailing list
> Xen-users [at] lists
> http://lists.xen.org/xen-users
>


peter.vandendriessche at gmail

May 14, 2012, 2:52 PM

Post #20 of 33 (2278 views)
Permalink
Re: gaming on multiple OS of the same machine? [In reply to]

Hi Casey,

Thanks a lot for the bunch of information. Some further questions on the
PLX and the VGA passthrough though.

On Sun, May 13, 2012 at 8:42 PM, Casey DeLorme <cdelorme [at] gmail> wrote:

> I checked every major manufacturers high end boards, searching for one
> with the best features to price and was aiming for the most PCIe slots I
> could get. Pretty much Every single board with more than 3 PCIe (x16)
> slots came with some form of PCI switch. Most of these PCI switches break
> IOMMU in one way or another.
>

Yes, indeed, all of the 4-GPU motherboards I know of have 2 PCI-e x16 slots
that each split into 2 x8 (so 4 x8 in total). Is this always a fatal
problem? Is there any easy way to find out if it will be a problem (e.g.
from the lspci info)?


The PLX was an entirely different form of problem, it merges device
> "function" with device identifiers. If you run "lspci" you get a list of
> your devices, they are identified by "bus:device.function".
> [...]
> Armed with this knowledge, here is where you may run into problems:
> - Finding a board with 4x PCIe x16 Slots not tied to a PCI Switch
>

So, do I understand correctly that it will work if and only if there is 1
"bus" per PCI-e slot?


- Sparing enough USB ports for all machines input devices


What do you mean by sparing here? On a different bus than the PCI-e slots
for the GPUs?


- Limited to around 3GB of RAM per HVM unless you buy 8GB RAM Chips
>

Neither seems a problem (3GB RAM per machine or 8GB RAM chips). The price
of RAM is fairly linear in its size.


- Will need a 6-core i7 to power all systems without potential resource
> conflicts
>

Okay. That raises the price a bit, but it'd still be well worth it.


- Encountering bugs nobody else has when you reach that 3rd or 4th HVM
>

This would probably be the real problem, given that I'm new to Xen.


I have never run more than one GPU in my computers before, so I don't know
> if there is some special magic that happens when you have two or more that
> they suddenly get even hotter, but I have to imagine that not to be the
> case unless you're doing some serious overclocking.


Depends on their size. Most GPUs are 2 PCI-e slots high (and occupy two of
those metal plates on the back), so plugging 4 of them in leaves no space
in between them, which hinders their air intake. Hence the need for
watercooling the GPUs in this case.


The ASRock Extreme4 Gen3 does have enough PCIe slots that I could connect
> three GPU's and still have space for a single-slot PCIe device, but I only
> have a 650W power supply, and have no need for more than one Windows
> instance.
>

... and it has a PLX chip. Right? Or is a PLX chip not a fatal problem?


Secondary pass through works great for gaming, shows the display after the
> machine boots without any problems, and takes literally no extra effort to
> setup on your part.
>
> Primary passthrough requires custom ATI patching, and what exists may not
> work for all cards.
>
> I began looking into Primary passthrough very recently, because I use my
> machine for more than just games and ran into a problem. Software like
> CAD, Photoshop, and 3D Sculpting tools use OpenGL and only work with the
> primary GPU, which means they either don't run or run without GPU
> acceleration (slowly).
>

Hmm? I'm not following. Most games also use OpenGL, right? And why would
OpenGL not support non-primary cards? I know that OpenCL can run on any
number of GPUs, so it'd surprise me if OpenGL was different. Do you have
any link where I can read more background on this?



> A lot to take in, but I hope my answers help a bit. If you have more
> questions I'll be happy to share what knowledge I can.
>

Certainly, they're of great help. Thanks a lot!

Best regards,
Peter


peter.vandendriessche at gmail

May 15, 2012, 4:50 AM

Post #21 of 33 (2267 views)
Permalink
Re: gaming on multiple OS of the same machine? [In reply to]

Test... has the e-mail below reached the mailing list? It's not in the
archives and I got a message that something of mine was rejected (saying I'm
not a member... whut?), so I'll send it again.





rulerof at gmail

May 15, 2012, 6:48 AM

Post #22 of 33 (2279 views)
Permalink
Re: gaming on multiple OS of the same machine? [In reply to]

Hello Peter,

Answering some of your questions inline below:

On Mon, May 14, 2012 at 5:52 PM, Peter Vandendriessche
<peter.vandendriessche [at] gmail> wrote:
> Hi Casey,
>
> Thanks a lot for the bunch of information. Some further questions on the PLX
> and the VGA passthrough though.
>
>
> On Sun, May 13, 2012 at 8:42 PM, Casey DeLorme <cdelorme [at] gmail> wrote:
>>
>> I checked every major manufacturers high end boards, searching for one
>> with the best features to price and was aiming for the most PCIe slots I
>> could get.  Pretty much Every single board with more than 3 PCIe (x16) slots
>> came with some form of PCI switch.  Most of these PCI switches break IOMMU
>> in one way or another.
>
>
> Yes, indeed, all of the 4-GPU-motherboards I know, have 2 PCI-e x16 slots
> which split to 2 x8 each (so 4 x8 in total). Is this always a fatal problem?
> Is there any easy way to find out if it will be a problem (like from the
> lspci info)?

I couldn't say for sure if there's any great way to identify what
would be a problem and what wouldn't. I don't know too much in the
way of the nitty-gritty details, but the problem with NF200 PCIe
switched boards, in particular, has something to do with the fact that
they route data to the southbridge, rather than the northbridge.
Something about it makes it so that it's not quite "real" PCIe
switching, it's more analogous to PCIe "routing," if that's a fair
term to use. Instead of switching traffic on the existing PCIe bus
that's natively part of the northbridge, they "bolt on" a second PCIe
bus through some kind of trickery that prevents a solid line of
"native PCIe" from being visible to the IOMMU. IIRC, the entire
contents of the NF200 bus itself is a valid passthrough device, but
the individual devices on it are not.... which kinda defeats the
purpose, IMHO. This is all just my recollection of when I looked into
it over a year ago, so I could be wrong about why it doesn't work...
but Casey's advice is sound: stay away from NF200. :)

Which brings me to PLX...

>> The PLX was an entirely different form of problem, it merges device
>> "function" with device identifiers.  If you run "lspci" you get a list of
>> your devices, they are identified by "bus:device.function".
>> [...]
>>
>> Armed with this knowledge, here is where you may run into problems:
>> -  Finding a board with 4x PCIe x16 Slots not tied to a PCI Switch
>
>
> So, do I understand correct that it will work if and only if there is 1
> "bus" per PCI-e slot?

In my experience, PLX PCIe switches are *fantastic*. They speak
native PCIe, and all the devices that they hook onto the PCIe bus have
a clear logical path to the IOMMU. The PLX switch built into the
Radeon 6990 is the reason you could attach each GPU to a different VM
(though, as I said, the architecture of the card PAST that point is
the reason this wouldn't be of any use to you), and the PLX chip on
the HighPoint RU1144A is the reason that each port on the card can be
assigned to a different VM. Granted, while the purpose of the RU1144A
is to provide four full-bandwidth USB3 ports to a host machine, the
way that HighPoint ended up architecting that goal provided me with
exactly the piece of hardware I needed to work out my USB connectivity
crisis, a purpose I highly doubt the engineers who designed it had
even thought of... but I thank them nonetheless :)
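In config terms that means each guest can simply claim one of the card's four controllers. A sketch, with invented BDFs (check your own with lspci):

```
# Guest 1's config: one of the four USB controllers on the card.
pci = [ '06:00.0' ]

# Guest 2's config would claim the next one, and so on:
# pci = [ '07:00.0' ]
```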


>> -  Sparing enough USB ports for all machines input devices
>
>
> What do you mean by sparing here? On a different bus than the PCI-e slots
> for the GPUs?

Specifically, what I think he's getting at is partly what I addressed
above; In order to supply your VMs with USB, the best bet here is to
use PCIe passthrough to hook different controllers to different VMs.
It's highly likely that the architecture of your mainboard will
prohibit pulling this off when you're going to have more than two
different guest OSes which each need their own PCI -> USB adapter.
Normally, because this kind of thing basically *never* matters,
especially seeing as how cost-prohibitive it would be to facilitate it
from the standpoint of motherboard manufacturers, you'll probably not
be able to find a board that will allow you to work out the problem.
Even if you did, I highly doubt it would have four 16x PCIe slots!
Also, since this kind of information is damn near impossible to come
by without getting the hardware in hand and checking it out
yourself... it's probably a lost cause to look for the "perfect"
motherboard.

>> -  Limited to around 3GB of RAM per HVM unless you buy 8GB RAM Chips
>
>
> Neither seems a problem (3GB RAM per machine or 8GB RAM chips). The price of
> RAM is fairly linear in its size.

I'll second your notion. 3.5 GB per DomU is more than enough for
pretty much any game. It'd be a different story if you were
virtualizing SQL databases or something, but games are designed to run
on systems that would probably make most enthusiasts on Xen-Users list
cry if they had to use such a machine on a daily basis :D

>> -  Will need a 6-core i7 to power all systems without potential resource
>> conflicts
>
> Okay. That rises the price a bit, but it'd still be well worth it.

At LEAST a 6-core chip. If you go the AMD route, just shell out for the
8-core chip instead, so you know you're golden. Dom0 won't be doing too
much, and that leaves two cores for each VM that they can max out without
even being capable of trampling each other.
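If you want to make that explicit, xl guest configs can pin vcpus to fixed host cores. A sketch (the core numbers are just one possible layout; adjust to your chip):

```
# Guest 1 of several: two dedicated cores, leaving 0-1 for Dom0.
vcpus = 2
cpus = "2-3"    # the next guest would get "4-5", then "6-7", and so on
```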

>> -  Encountering bugs nobody else has when you reach that 3rd or 4th HVM
>
>
> This would probably be the real problem, given that I'm new to Xen.

I've seen odd behavior myself. Sometimes, one of the VMs' USB ports
won't work. I switch the PCIe assignments around, and it's just a
particular VM that's being fishy... nothing wrong with the actual
controller. It's been weird, but I attribute it to the fact that my
USB controller is plugged into a riser cable; I've seen devices get
funny when you do that. However, it was the only way to plug in four
GPUs AND a PCIe USB controller :P

>> I have never run more than one GPU in my computers before, so I don't know
>> if there is some special magic that happens when you have two or more that
>> they suddenly get even hotter, but I have to imagine that not to be the case
>> unless you're doing some serious overclocking.
>
>
> Depends on their size. Most GPUs are 2 PCI-e slots high (and occupy two of
> those metal plates on the back) and hence plugging 4 of them in leaves no
> space inbetween them, which hinders their air intake. Hence the need for
> watercooling the GPUs in this case.

That would probably raise your costs significantly, but it would
greatly increase your odds of success, I'd say. My biggest
limitation and largest source of problems is that my GPUs eat all 8
slots. It severely limited my routes for solving problems, and I had to
cut the plastic off of a portion of a GPU fan shroud to accommodate a
riser cable. It did end up working, though :)


>> The ASRock Extreme4 Gen3 does have enough PCIe slots that I could connect
>> three GPU's and still have space for a single-slot PCIe device, but I only
>> have a 650W power supply, and have no need for more than one Windows
>> instance.
>
>
> ... and it has a PLX chip. Right? Or is a PLX chip not a fatal problem?

I can't speak to an Intel setup, but for a board, I'd suggest the
Gigabyte GA-990FXA-UD7. I'm testing it right now, and I'm fairly
confident it will work for what you want to do. The tested and
working setup that I've built is on the MSI 890FXA-GD70. That board,
I can 100% guarantee WILL WORK for this purpose (and might require a
BIOS downgrade, YMMV). I'd recommend the newer Gigabyte board though,
just because it's newer, I suppose. Probably supports more CPUs or
something.

>> Secondary pass through works great for gaming, shows the display after the
>> machine boots without any problems, and takes literally no extra effort to
>> setup on your part.
>>
>> Primary passthrough requires custom ATI patching, and what exists may not
>> work for all cards.
>>
>> I began looking into Primary passthrough very recently, because I use my
>> machine for more than just games and ran into a problem.  Software like CAD,
>> Photoshop, and 3D Sculpting tools use OpenGL and only work with the primary
>> GPU, which means they either don't run or run without GPU acceleration
>> (slowly).
>
>
> Hmm? I'm not following. Most games also use OpenGL, right? And why would
> OpenGL not support non-primary cards? I know that OpenCL can run on any
> number of GPUs, so it'd surprise me if OpenGL was different. Do you have any
> link where I can read more background on this?

Not sure where to read more on this, but I can share a big caveat:
GPU-Accelerated video decoding DOES NOT WORK. It crashes stuff.
Hardcore. Youtube = fail... until you disable hardware decoding. Not
sure about the OpenGL thing Casey mentions, but I can confirm Direct3D
works like a charm :)

>>
>> A lot to take in, but I hope my answers help a bit.  If you have more
>> questions I'll be happy to share what knowledge I can.
>
>
> Certainly, they're of great help. Thanks a lot!
>
> Best regards,
> Peter

I have to admit I'm excited by the prospect of someone else venturing
down this path. I just hope that I can save you some repeated bouts
of banging your head against a wall ;)

If you'd like, I could send you some photos of my setup, or more
detailed hardware specs. I had to do a lot of research to build this
on a reasonable budget... because if it hadn't been cheaper
than multiple systems, I know I wouldn't have done it!

Cheers,
Andrew Bobulsky

_______________________________________________
Xen-users mailing list
Xen-users [at] lists
http://lists.xen.org/xen-users


cdelorme at gmail

May 15, 2012, 9:35 AM

Post #23 of 33 (2236 views)
Permalink
Re: gaming on multiple OS of the same machine? [In reply to]

Sorry for the delay, I meant to reply yesterday but had a lot of trouble
writing up an appropriate answer, as I am no specialist when it comes to
hardware.


I have respect for anyone willing to invest in experimental technology, but
that said I would never recommend something if I didn't know it would work,
and work better than an existing alternative.

It is possible to get four HVMs with GPU passthrough on one machine, but I
don't believe it is practical. I would think it easier to try out a dual
setup first, especially if you are new to Xen; if that works, you can
reproduce it easily.

It would be a tragedy to invest in amazing hardware equivalent to the cost
of two machines only to encounter a fatal problem (whether mentioned here
or brand new).


I was new to Xen as of March, it took me two months of tinkering to get a
firm enough grasp of Xen that I managed GPU passthrough to one machine.
After a month more, I can safely say I could probably produce two to three
HVM's with passthrough on my current hardware if I wanted to go that route.


As Andrew stated, there is no "perfect" board. I looked just a couple of
months ago and failed, despite going through hundreds of boards from at
least a dozen manufacturers. It's clearly possible to set up a four machine
passthrough system, but I wouldn't consider it practical.

If you want to save time, sanity, and possibly an expensive investment, I
would stick with a more modest setup and duplicate it for the same cost.

Given that Andrew has successfully set up what you are looking to build, it
would be wise to take him up on his offer and get hardware & setup details.

*********

Intel Core i7 processors have hyperthreading, so a 4-core i7 exposes 8
threads, roughly the equivalent of an 8-core AMD in vcore count. If Andrew
has had success with an 8-core AMD CPU and no resource issues, you might be
able to get by with a 4-core i7 (a 6-core i7 would be 12 vcores), but I
would be hesitant to assign all the cores to HVMs.

*********

Andrew is correct: stay away from NF200, it's not compatible. There are
certain required features it doesn't have, which can be checked with "lspci
-vvv", but I don't recall which flags you would be looking for.


PLX is a great product, I don't deny that in the least, and it is IOMMU
compatible, which is awesome. However, I believe the flaw comes from the
design or implementation of PLX on a given board.

In my case the Extreme7 shared two PCIe slots with other components through
the PLX bridge. While that is NOT the case with my Extreme4, when I passed
one of those "slots" on the Extreme7 it treated the entire bridge as a
single PCI device, and all components on it were passed via
xen-pciback.hide. It was an awful experience: I lost internet and various
input devices, and due to the lack of documentation on reversing
xen-pciback.hide I was unable to fix it without reinstalling.

I found lspci to be of little help identifying it; as a bridge it just acts
as a layer, so you can use tree view (lspci -tv) for details, but it won't
say whether the devices are all tied together. My guess is the PLX was
bridged to the PCIe slot and they shared the same space; the tree on the
Extreme4 has tons of unused bridge ports, in contrast to the Extreme7,
which was completely filled.

This experience also leads me to question how the Dual GPU cards will be
handled, as a single merged device or separately. Depending on PLX
"implementation" it could go either way.


In conclusion, if you get a board with PLX it will pass devices, but be
prepared (by which I mean keep regular backups as you progress) to hit a
wall or two during testing.

*********

For five machines (4x Windows and Dom0) each to have a keyboard and mouse
you need 10 USB ports. If you choose to use PCI Passthrough you will also
need five USB PCI Buses.

My Extreme4 comes with four USB PCI buses onboard (2x USB 3.0 and 2x USB
2.0), so in that sense you might need a spare PCI slot for a USB bus
expansion card.

If you choose to use USB passthrough you won't need as many PCI buses, but
you may not have the ability to pass complex devices that require drivers.
For example, I could not get an XBox 360 Wireless Adapter working with USB
passthrough (with PCI passthrough it works fine).
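For reference, the two approaches look like this in an HVM config. The device IDs are examples only (045e is Microsoft's USB vendor ID; take the real vendor:product pair from lsusb), and older xm-style configs take usbdevice as a single string rather than a list:

```
# USB passthrough: forward one host device by vendor:product.  No extra PCI
# bus needed, but complex devices that need drivers may fail, as noted above.
usb = 1
usbdevice = [ 'host:045e:0719' ]

# PCI passthrough of a whole USB controller instead (BDF is hypothetical):
# pci = [ '00:1d.0' ]
```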

*********

On that same note, I encountered problems with USB 3.0. I believe they were
just USB incompatibilities, not Xen or PCI passthrough related; you won't
have any trouble with normal input devices.

My board also has a Marvell SATA controller which did not work with VT-d
(DMA READ operation fails), so I had to disable it in UEFI to prevent
lengthy and verbose errors at boot time. For me, losing two SATA ports was
not a big deal; for others this could be a deal breaker.

*********

While not a limitation per se, you may want to consider buying more hard
drives to run the virtual machines on; four machines making IO requests to
partitions on the same drive could get sluggish even with PV drivers.
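A sketch of what that looks like per guest, with example device paths:

```
# Give each guest a whole physical drive instead of partitions on one disk.
disk = [ 'phy:/dev/sdb,xvda,w' ]    # guest 1's SSD; guest 2 would use /dev/sdc
```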

*********

In line with what Andrew said, manufacturers don't list the information you
would need to verify any of this stuff; you essentially have to buy it and
try it, or look for other people's reported successes.

*********

In contrast to what Andrew has reported, my problem is OpenGL for
accelerated "graphics"; oddly, accelerated video works great. YouTube works
just fine with hardware acceleration enabled, and I play a large number of
media files and formats without any problems (except for h.264 10-bit video
playback, where I had to change to an alternative codec since Microsoft does
not support 10-bit yet).

Direct3D works: I have played Borderlands and The Last Remnant at max
settings without lag. To test compatibility I also installed Final Fantasy
7 & 8, which work with the related patch files. I am pretty sure the patch
files open the game and run it through an OpenGL filter of some kind, so I
have to imagine that works. While I am using an AMD/ATI card, Borderlands
installed nVidia PhysX, which is also working.


The error messages I have encountered appear to be tied to specific
applications, more specifically image editing software. Photoshop and
Blender's 3D sculpting tools are the problem; I think Blender uses a Java
library. Photoshop runs but says there is no GPU acceleration; Blender
doesn't open at all. Search results all say it's because they can't be told
to use a secondary graphics card.

When I searched for details about either message, both led me to problems
with the software using a secondary graphics card. Perhaps I should find it
funny that games I can buy for $30 are more compatible with secondary
graphics than $400 professional image editing software.


Do keep us updated on your progress with the system, it's always nice to
have more people giving this a try.

~Casey

On Tue, May 15, 2012 at 9:48 AM, Andrew Bobulsky <rulerof [at] gmail> wrote:

> Hello Peter,
>
> Answering some of your questions inline below:
>
> On Mon, May 14, 2012 at 5:52 PM, Peter Vandendriessche
> <peter.vandendriessche [at] gmail> wrote:
> > Hi Casey,
> >
> > Thanks a lot for the bunch of information. Some further questions on the
> PLX
> > and the VGA passthrough though.
> >
> >
> > On Sun, May 13, 2012 at 8:42 PM, Casey DeLorme <cdelorme [at] gmail>
> wrote:
> >>
> >> I checked every major manufacturers high end boards, searching for one
> >> with the best features to price and was aiming for the most PCIe slots I
> >> could get. Pretty much Every single board with more than 3 PCIe (x16)
> slots
> >> came with some form of PCI switch. Most of these PCI switches break
> IOMMU
> >> in one way or another.
> >
> >
> > Yes, indeed, all of the 4-GPU-motherboards I know, have 2 PCI-e x16 slots
> > which split to 2 x8 each (so 4 x8 in total). Is this always a fatal
> problem?
> > Is there any easy way to find out if it will be a problem (like from the
> > lspci info)?
>
> I couldn't say for sure if there's any great way to identify what
> would be a problem and what wouldn't. I don't know too much in the
> way of the nitty-gritty details, but the problem with NF200 PCIe
> switched boards, in particular, has something do with the fact that
> they route data to the southbridge, rather than the northbridge.
> Something about it makes it so that it's not quite "real" PCIe
> switching, it's more analogous to PCIe "routing," if that's a fair
> term to use. Instead of switching traffic on the existing PCIe bus
> that's natively part of the northbridge, they "bolt on" a second PCIe
> bus through some kind of trickery that prevents a solid line of
> "native PCIe" from being visible to the IOMMU. IIRC, the entire
> contents of the NF200 bus itself is a valid passthrough device, but
> the individual devices on it are not.... which kinda defeats the
> purpose, IMHO. This is all just my recollection of when I looked into
> it over a year ago, so I could be wrong about why it doesn't work...
> but Casey's advice is sound: stay away from NF200. :)
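One rough sanity check for the lspci question above (a sketch using invented sample output, not from any particular board): list which PCI bus each GPU sits on. If every x16 slot's GPU shows up on its own bus, and `lspci -t` shows no switch device between the root port and the GPU, passthrough is more likely to work.

```shell
# Sample "lspci | grep VGA" output from a hypothetical 2-GPU board;
# on a real system, replace the here-string with the actual command.
sample='01:00.0 VGA compatible controller: ATI Radeon HD 6870
02:00.0 VGA compatible controller: ATI Radeon HD 6870'

# Print the PCI bus number (the field before the first colon) per GPU.
# Distinct buses per slot is a good sign; also inspect "lspci -t" for
# switch devices sitting between the root port and the GPU.
echo "$sample" | awk -F: '{print $1}' | sort -u
```

On real hardware, follow up with `lspci -t -v` to see the full tree and spot any NF200/PLX device in the path.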
>
> Which brings me to PLX...
>
> >> The PLX was an entirely different form of problem, it merges device
> >> "function" with device identifiers. If you run "lspci" you get a list
> of
> >> your devices, they are identified by "bus:device.function".
> >> [...]
> >>
> >> Armed with this knowledge, here is where you may run into problems:
> >> - Finding a board with 4x PCIe x16 Slots not tied to a PCI Switch
> >
> >
> > So, do I understand correctly that it will work if and only if there is 1
> > "bus" per PCI-e slot?
>
> In my experience, PLX PCIe switches are *fantastic*. They speak
> native PCIe, and all the devices that they hook onto the PCIe bus have
> a clear logical path to the IOMMU. The PLX switch built into the
> Radeon 6990 is the reason you could attach each GPU to a different VM
> (though, as I said, the architecture of the card PAST that point is
> the reason this wouldn't be of any use to you), and the PLX chip on
> the HighPoint RU1144A is the reason that each port on the card can be
> assigned to a different VM. Granted, while the purpose of the RU1144A
> is to provide four full-bandwidth USB3 ports to a host machine, the
> way that HighPoint ended up architecting that goal provided me with
> exactly the piece of hardware I needed to work out my USB connectivity
> crisis, a purpose I highly doubt the engineers who designed it had
> even thought of... but I thank them nonetheless :)
>
>
> >> - Sparing enough USB ports for all machines input devices
> >
> >
> > What do you mean by sparing here? On a different bus than the PCI-e slots
> > for the GPUs?
>
> Specifically, what I think he's getting at is partly what I addressed
> above; in order to supply your VMs with USB, the best bet here is to
> use PCIe passthrough to hook different controllers to different VMs.
> It's highly likely that the architecture of your mainboard will
> prohibit pulling this off when you're going to have more than two
> different guest OSes which each need their own PCI -> USB adapter.
> Normally, because this kind of thing basically *never* matters,
> especially seeing as how cost-prohibitive it would be to facilitate it
> from the standpoint of motherboard manufacturers, you'll probably not
> be able to find a board that will allow you to work out the problem.
> Even if you did, I highly doubt it would have four 16x PCIe slots!
> Also, since this kind of information is damn near impossible to come
> by without getting the hardware in hand and checking it out
> yourself... it's probably a lost cause to look for the "perfect"
> motherboard.
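For what it's worth, here is roughly what this looks like in a guest's xl config once a USB controller (and GPU) have been bound to pciback — a sketch only; the guest name, memory size, and BDF addresses are invented for illustration:

```
# /etc/xen/gamer1.cfg -- hypothetical HVM guest (names and BDFs invented)
builder = "hvm"
name    = "gamer1"
memory  = 3584
vcpus   = 2

# Pass through this guest's GPU (both functions) and its own USB
# controller, identified by bus:device.function as shown by lspci.
# The devices must first be bound to the pciback driver in Dom0.
pci = [ "01:00.0", "01:00.1", "05:00.0" ]
```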
>
> >> - Limited to around 3GB of RAM per HVM unless you buy 8GB RAM Chips
> >
> >
> > Neither seems a problem (3GB RAM per machine or 8GB RAM chips). The
> price of
> > RAM is fairly linear in its size.
>
> I'll second your notion. 3.5 GB per DomU is more than enough for
> pretty much any game. It'd be a different story if you were
> virtualizing SQL databases or something, but games are designed to run
> on systems that would probably make most enthusiasts on Xen-Users list
> cry if they had to use such a machine on a daily basis :D
>
> >> - Will need a 6-core i7 to power all systems without potential resource
> >> conflicts
> >
> > Okay. That raises the price a bit, but it'd still be well worth it.
>
> At LEAST a 6 core chip. If you go the AMD route, just shell out
> for the 8 core chip instead, so you know you're golden. Dom0 won't be
> doing too much, and that leaves two cores for each VM that they can
> max out without even being capable of trampling each other.
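That core budget can be written straight into each guest's xl config; a sketch, assuming an 8-core AMD chip with cores 0-1 reserved for Dom0 (guest names and core numbers are illustrative):

```
# Hypothetical per-guest snippet: two dedicated physical cores per HVM.
vcpus = 2
cpus  = "2-3"      # first guest; use "4-5" and "6-7" for the others

# And on the Xen boot line, keep Dom0 off the guests' cores, e.g.:
#   dom0_max_vcpus=2 dom0_vcpus_pin
```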
>
> >> - Encountering bugs nobody else has when you reach that 3rd or 4th HVM
> >
> >
> > This would probably be the real problem, given that I'm new to Xen.
>
> I've seen odd behavior myself. Sometimes, one VM's USB ports
> won't work. I switch the PCIe assignments around, and it's just a
> particular VM that's being fishy... nothing wrong with the actual
> controller. It's been weird, but I attribute it to the fact that my
> USB controller is plugged into a riser cable. I've seen devices get
> funny when you do that. However, it was the only way to plug in four
> GPUs AND a PCIe USB controller :P
>
> >> I have never run more than one GPU in my computers before, so I don't
> know
> >> if there is some special magic that happens when you have two or more
> that
> >> they suddenly get even hotter, but I have to imagine that not to be the
> case
> >> unless you're doing some serious overclocking.
> >
> >
> > Depends on their size. Most GPUs are 2 PCI-e slots high (and occupy two
> of
> > those metal plates on the back) and hence plugging 4 of them in leaves no
> > space in between them, which hinders their air intake. Hence the need for
> > watercooling the GPUs in this case.
>
> That would probably raise your costs significantly, but it would
> greatly increase your odds of success, I'd say. My biggest
> limitation and largest source of problems is that my GPUs eat all 8
> slots. It severely limited my routes for solving problems, and I had to
> cut the plastic off of a portion of a GPU fan shroud to accommodate a
> riser cable. It did end up working, though :)
>
>
> >> The ASRock Extreme4 Gen3 does have enough PCIe slots that I could
> connect
> >> three GPU's and still have space for a single-slot PCIe device, but I
> only
> >> have a 650W power supply, and have no need for more than one Windows
> >> instance.
> >
> >
> > ... and it has a PLX chip. Right? Or is a PLX chip not a fatal problem?
>
> I can't speak to an Intel setup, but for a board, I'd suggest the
> Gigabyte GA-990FXA-UD7. I'm testing it right now, and I'm fairly
> confident it will work for what you want to do. The tested and
> working setup that I've built is on the MSI 890FXA-GD70. That board,
> I can 100% guarantee WILL WORK for this purpose (and might require a
> BIOS downgrade, YMMV). I'd recommend the newer Gigabyte board though,
> just because it's newer, I suppose. Probably supports more CPUs or
> something.
>
> >> Secondary pass through works great for gaming, shows the display after
> the
> >> machine boots without any problems, and takes literally no extra effort
> to
> >> setup on your part.
> >>
> >> Primary passthrough requires custom ATI patching, and what exists may
> not
> >> work for all cards.
> >>
> >> I began looking into Primary passthrough very recently, because I use my
> >> machine for more than just games and ran into a problem. Software like
> CAD,
> >> Photoshop, and 3D Sculpting tools use OpenGL and only work with the
> primary
> >> GPU, which means they either don't run or run without GPU acceleration
> >> (slowly).
> >
> >
> > Hmm? I'm not following. Most games also use OpenGL, right? And why would
> > OpenGL not support non-primary cards? I know that OpenCL can run on any
> > number of GPUs, so it'd surprise me if OpenGL was different. Do you have
> any
> > link where I can read more background on this?
>
> Not sure where to read more on this, but I can share a big caveat:
> GPU-Accelerated video decoding DOES NOT WORK. It crashes stuff.
> Hardcore. Youtube = fail... until you disable hardware decoding. Not
> sure about the OpenGL thing Casey mentions, but I can confirm Direct3D
> works like a charm :)
>
> >>
> >> A lot to take in, but I hope my answers help a bit. If you have more
> >> questions I'll be happy to share what knowledge I can.
> >
> >
> > Certainly, they're of great help. Thanks a lot!
> >
> > Best regards,
> > Peter
>
> I have to admit I'm excited by the prospect of someone else venturing
> down this path. I just hope that I can save you some repeated bouts
> of banging your head against a wall ;)
>
> If you'd like, I could send you some photos of my setup, or more
> detailed hardware specs. I had to do a lot of research to build this
> on a reasonable budget... because if it wouldn't have been cheaper
> than multiple systems, I know I wouldn't have done it!
>
> Cheers,
> Andrew Bobulsky
>


james.harper at bendigoit

May 15, 2012, 5:42 PM

Post #24 of 33 (2229 views)
Permalink
Re: gaming on multiple OS of the same machine? [In reply to]

>
> Intel Core i7 processors have hyperthreading; a 4-core i7 is the equivalent of
> an 8-core AMD, so if Andrew has had success with an 8-core AMD CPU and
> no resource issues, you might be able to get by with a 4-core i7 (a 6-core i7
> would be 12 vcores), but I would be hesitant to assign all the cores to HVMs.
>

Be careful with this... a hyperthread is not equivalent to a real thread. Last time I checked, the hyperthread only ran while the main thread was stalled (waiting for a memory fetch or something). Before the various schedulers (Linux/Windows) learned how to manage hyperthreads, there were cases where performance was worse with hyperthreading turned on. Even the last server build I did for running a specific application says "make sure hyperthreading is disabled" in the setup notes.

James

_______________________________________________
Xen-users mailing list
Xen-users [at] lists
http://lists.xen.org/xen-users


lists.xen at nuclearfallout

May 15, 2012, 10:17 PM

Post #25 of 33 (2215 views)
Permalink
Re: gaming on multiple OS of the same machine? [In reply to]

On 5/15/2012 5:42 PM, James Harper wrote:
> Be careful with this... a hyperthread is not equivalent to a real thread. Last time I checked, the hyperthread only ran while the main thread was stalled (waiting for memory fetch or something). Before the various schedulers (Linux/Windows) learnt about how to manage hyperthreads there were cases where performance was worse with hyperthreading turned on. Even the last server build I did for running a specific application says "make sure hyperthreading is disabled" in the setup notes

While hyperthreading is not anywhere close to adding a core, I've found
that the current version of it is light years beyond the old P4
implementation. That one definitely needed to be disabled, as it hurt
performance pretty badly for latency-sensitive applications, possibly
because of the problem you describe.

The current implementation seems to interleave two threads running on a
single physical core well and give them roughly the same resources, with
minimal delays involved.

Just always be aware that hyperthreads are not real cores, make sure not
to manually set affinities (so that the OS/hypervisor can schedule
running processes/vcpus onto separate physical cores when possible), and
know that when you see "50%" overall CPU usage, you're actually a lot
closer to 85%!
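A quick way to see which logical CPUs are hyperthread siblings (so you can sanity-check that two vcpus aren't landing on one physical core) is the topology files under /sys. A sketch using made-up sibling data for a hypothetical 4-core/8-thread i7; on a real host, read the files directly:

```shell
# Hypothetical contents of
# /sys/devices/system/cpu/cpuN/topology/thread_siblings_list for cpus 0-3.
# On a real machine:
#   cat /sys/devices/system/cpu/cpu*/topology/thread_siblings_list
printf '%s\n' "0,4" "1,5" "2,6" "3,7" | while read -r pair; do
  echo "physical core -> logical CPUs $pair"
done
```

Pinning one guest to "2,6" would put both of its vcpus on the same physical core; pairs drawn from different lines are what you want.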

-John

