
Mailing List Archive: OpenStack: Operators

libvirtError: operation failed: failed to retrieve chardev info in qemu with 'info chardev'



maximilianrh at googlemail

Jul 21, 2011, 5:46 AM

libvirtError: operation failed: failed to retrieve chardev info in qemu with 'info chardev'

Hello,

whenever I try to run an instance, it gets scheduled and immediately
afterwards goes into the "shutdown" state. The
/var/log/nova/nova-compute.log file shows the following error:

2011-07-21 14:31:59,846 DEBUG nova.utils [-] Running cmd (subprocess): sudo qemu-nbd -d /dev/nbd15 from (pid=929) execute /usr/lib/pymodules/python2.7/nova/utils.py:150
2011-07-21 14:32:03,033 ERROR nova.exception [-] Uncaught exception
(nova.exception): TRACE: Traceback (most recent call last):
(nova.exception): TRACE:   File "/usr/lib/pymodules/python2.7/nova/exception.py", line 120, in _wrap
(nova.exception): TRACE:     return f(*args, **kw)
(nova.exception): TRACE:   File "/usr/lib/pymodules/python2.7/nova/virt/libvirt_conn.py", line 617, in spawn
(nova.exception): TRACE:     domain = self._create_new_domain(xml)
(nova.exception): TRACE:   File "/usr/lib/pymodules/python2.7/nova/virt/libvirt_conn.py", line 1079, in _create_new_domain
(nova.exception): TRACE:     domain.createWithFlags(launch_flags)
(nova.exception): TRACE:   File "/usr/lib/python2.7/dist-packages/libvirt.py", line 337, in createWithFlags
(nova.exception): TRACE:     if ret == -1: raise libvirtError('virDomainCreateWithFlags() failed', dom=self)
(nova.exception): TRACE: libvirtError: operation failed: failed to retrieve chardev info in qemu with 'info chardev'
(nova.exception): TRACE:
2011-07-21 14:32:03,035 ERROR nova.compute.manager [5BFU368FD74ILICHW-N9 cloudypants wpscales] Instance '13' failed to spawn. Is virtualization enabled in the BIOS?
(nova.compute.manager): TRACE: Traceback (most recent call last):
(nova.compute.manager): TRACE:   File "/usr/lib/pymodules/python2.7/nova/compute/manager.py", line 234, in run_instance
(nova.compute.manager): TRACE:     self.driver.spawn(instance_ref)
(nova.compute.manager): TRACE:   File "/usr/lib/pymodules/python2.7/nova/exception.py", line 126, in _wrap
(nova.compute.manager): TRACE:     raise Error(str(e))
(nova.compute.manager): TRACE: Error: operation failed: failed to retrieve chardev info in qemu with 'info chardev'
(nova.compute.manager): TRACE:
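(As an aside on reading the trace: the same libvirt message appears twice because nova's _wrap decorator in exception.py catches the libvirtError and re-raises it as a generic Error carrying only the message string. A minimal sketch of that mechanism, simplified from the paths shown in the trace above:)

```python
# Simplified sketch (assumption: reduced from nova/exception.py's _wrap
# decorator as it appears in the traceback) showing why the libvirt error
# message shows up a second time as a plain nova Error.
class Error(Exception):
    pass

def wrap_exception(f):
    def _wrap(*args, **kw):
        try:
            return f(*args, **kw)
        except Exception as e:
            # The original exception type is discarded; only the text survives.
            raise Error(str(e))
    return _wrap

@wrap_exception
def spawn():
    # Stand-in for libvirt raising libvirtError during createWithFlags().
    raise RuntimeError("operation failed: failed to retrieve chardev info "
                       "in qemu with 'info chardev'")

try:
    spawn()
except Error as e:
    msg = str(e)
    print(msg)
```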

I have found the following at
https://bugs.launchpad.net/nova/+bug/702741/comments/4:

Seems like a permissions issue. I would suspect:
a) console.log isn't readable due to being a subdirectory that nova
doesn't have permissions for (i.e. /root/...)
or
b) there is a permissions issue with the new configuration of nova
web console trying to connect to a device
(you could test this by removing the offending line from the
libvirt.xml in the instance directory)
or
c) there is some other strange issue with regards to kvm permissions

You may get more info by going to the instance directory and trying
to run virsh create libvirt.xml. (if it runs properly as root then
also try it as the nova user)

Vish
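(The test Vish suggests could look like this, assuming the instance directory from my setup below; adjust the path to your own instance:)

```shell
# Sketch of the suggested test, not verified on every libvirt version.
# Uses the instance directory from this report as an example path.
cd /var/lib/nova/instances/instance-0000000d

# First as root -- if this works, the libvirt.xml itself is fine:
virsh create libvirt.xml

# Then as the nova user -- a failure only here points at permissions:
sudo -u nova virsh create libvirt.xml
```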

And if I check the instance directory:

# ls -lah /var/lib/nova/instances/instance-0000000d/
total 37M
drwxr-xr-x 2 nova nogroup 4.0K 2011-07-21 14:31 .
drwxr-xr-x 4 nova root    4.0K 2011-07-21 14:31 ..
-rw-r----- 1 root root       0 2011-07-21 14:32 console.log
-rw-r--r-- 1 root root     32M 2011-07-21 14:31 disk
-rw-r--r-- 1 root root    6.1M 2011-07-21 14:31 disk.local
-rw-r--r-- 1 root root    4.3M 2011-07-21 14:31 kernel
-rw-r--r-- 1 nova nogroup 1.8K 2011-07-21 14:31 libvirt.xml

I can see that console.log is not readable - at least not for the nova
user. All nova processes run under the user nova:

# ps aux | grep nova-
nova  889 0.0 0.0  35716  1256 ? Ss Jul20  0:00 su -c nova-api --flagfile=/etc/nova/nova.conf nova
nova  891 0.0 0.0  35716  1256 ? Ss Jul20  0:00 su -c nova-network --flagfile=/etc/nova/nova.conf nova
nova  892 0.0 0.0  35716  1256 ? Ss Jul20  0:00 su -c nova-scheduler --flagfile=/etc/nova/nova.conf nova
nova  898 0.0 0.0  35716  1252 ? Ss Jul20  0:00 su -c nova-compute --flagfile=/etc/nova/nova.conf nova
nova  926 1.5 0.9 113748 39980 ? S  Jul20 23:23 /usr/bin/python /usr/bin/nova-network --flagfile=/etc/nova/nova.conf
nova  927 0.0 1.2 132124 51792 ? S  Jul20  0:04 /usr/bin/python /usr/bin/nova-api --flagfile=/etc/nova/nova.conf
nova  928 1.5 0.9 112484 38852 ? S  Jul20 23:32 /usr/bin/python /usr/bin/nova-scheduler --flagfile=/etc/nova/nova.conf
nova  929 1.6 1.3 328412 56572 ? Sl Jul20 25:24 /usr/bin/python /usr/bin/nova-compute --flagfile=/etc/nova/nova.conf
root 3342 0.0 0.3  61224 16132 ? S  06:25  0:00 /usr/bin/python /usr/bin/nova-objectstore --uid 107 --gid 65534 --pidfile /var/run/nova/nova-objectstore.pid --flagfile=/etc/nova/nova.conf --nodaemon --logfile=/var/log/nova/nova-objectstore.log

What I don't understand is why all the files within the instance folder
are owned by root. Is this the problem?
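(As a quick experiment I could try handing the files back to the nova user, something like the following -- I'm not sure this is the correct long-term ownership for a nova deployment, and the path is again my specific instance directory:)

```shell
# Workaround sketch only: give the instance files to the nova user/group
# seen in the directory listing above, then retry launching the instance.
# Verify what ownership your nova/libvirt setup actually expects first.
sudo chown -R nova:nogroup /var/lib/nova/instances/instance-0000000d
sudo chmod 0644 /var/lib/nova/instances/instance-0000000d/console.log
```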

Regards

Max

