
Mailing List Archive: OpenStack: Operators

Can't start a vm, from dashboard

 

 



jjpavlik at gmail

Aug 7, 2013, 11:50 AM

Post #1 of 4
Can't start a vm, from dashboard

I just finished installing everything and tried to create my first VM from
the dashboard, but it doesn't work. After choosing a flavor and hitting
Launch, it starts "creating" the instance, but after a few seconds it stops
with: "Error: There was an error submitting the form. Please try again.".
The only place where I found anything related is nova.log on my compute
node; here is the log:

2013-08-07 18:05:55.293 DEBUG nova.openstack.common.rpc.common [req-0cfe760f-2e74-4e92-919c-663ba02c7f2f 20390b639d4449c18926dca5e038ec5e d1e3aae242f14c488d2225dcbf1e96d6] Timed out waiting for RPC response: timed out _error_callback /usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/impl_kombu.py:628
2013-08-07 18:05:55.479 DEBUG nova.quota [req-0cfe760f-2e74-4e92-919c-663ba02c7f2f 20390b639d4449c18926dca5e038ec5e d1e3aae242f14c488d2225dcbf1e96d6] Rolled back reservations ['3e941a2b-2cc6-4f01-8dc1-13dc09369141', '411f6f70-415e-4a21-aa06-3980070d6095', 'd4791eb7-b75a-4ab8-bfdb-5d5cd201e40d'] rollback /usr/lib/python2.7/dist-packages/nova/quota.py:1012
2013-08-07 18:05:55.480 ERROR nova.api.openstack [req-0cfe760f-2e74-4e92-919c-663ba02c7f2f 20390b639d4449c18926dca5e038ec5e d1e3aae242f14c488d2225dcbf1e96d6] Caught error: Timeout while waiting on RPC response.
2013-08-07 18:05:55.480 29278 TRACE nova.api.openstack Traceback (most recent call last):
2013-08-07 18:05:55.480 29278 TRACE nova.api.openstack   File "/usr/lib/python2.7/dist-packages/nova/api/openstack/__init__.py", line 81, in __call__
2013-08-07 18:05:55.480 29278 TRACE nova.api.openstack     return req.get_response(self.application)
2013-08-07 18:05:55.480 29278 TRACE nova.api.openstack   File "/usr/lib/python2.7/dist-packages/webob/request.py", line 1296, in send
2013-08-07 18:05:55.480 29278 TRACE nova.api.openstack     application, catch_exc_info=False)
2013-08-07 18:05:55.480 29278 TRACE nova.api.openstack   File "/usr/lib/python2.7/dist-packages/webob/request.py", line 1260, in call_application
2013-08-07 18:05:55.480 29278 TRACE nova.api.openstack     app_iter = application(self.environ, start_response)
2013-08-07 18:05:55.480 29278 TRACE nova.api.openstack   File "/usr/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
2013-08-07 18:05:55.480 29278 TRACE nova.api.openstack     return resp(environ, start_response)
2013-08-07 18:05:55.480 29278 TRACE nova.api.openstack   File "/usr/lib/python2.7/dist-packages/keystoneclient/middleware/auth_token.py", line 450, in __call__
2013-08-07 18:05:55.480 29278 TRACE nova.api.openstack     return self.app(env, start_response)
...
2013-08-07 18:05:55.480 29278 TRACE nova.api.openstack   File "/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/amqp.py", line 551, in __iter__
2013-08-07 18:05:55.480 29278 TRACE nova.api.openstack     self._iterator.next()
2013-08-07 18:05:55.480 29278 TRACE nova.api.openstack   File "/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/impl_kombu.py", line 648, in iterconsume
2013-08-07 18:05:55.480 29278 TRACE nova.api.openstack     yield self.ensure(_error_callback, _consume)
2013-08-07 18:05:55.480 29278 TRACE nova.api.openstack   File "/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/impl_kombu.py", line 566, in ensure
2013-08-07 18:05:55.480 29278 TRACE nova.api.openstack     error_callback(e)
2013-08-07 18:05:55.480 29278 TRACE nova.api.openstack   File "/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/impl_kombu.py", line 629, in _error_callback
2013-08-07 18:05:55.480 29278 TRACE nova.api.openstack     raise rpc_common.Timeout()
2013-08-07 18:05:55.480 29278 TRACE nova.api.openstack Timeout: Timeout while waiting on RPC response.
2013-08-07 18:05:55.480 29278 TRACE nova.api.openstack
2013-08-07 18:05:55.488 INFO nova.api.openstack [req-0cfe760f-2e74-4e92-919c-663ba02c7f2f 20390b639d4449c18926dca5e038ec5e d1e3aae242f14c488d2225dcbf1e96d6] http://172.19.136.13:8774/v2/d1e3aae242f14c488d2225dcbf1e96d6/servers returned with HTTP 500
2013-08-07 18:05:55.488 DEBUG nova.api.openstack.wsgi [req-0cfe760f-2e74-4e92-919c-663ba02c7f2f 20390b639d4449c18926dca5e038ec5e d1e3aae242f14c488d2225dcbf1e96d6] Returning 500 to user: The server has either erred or is incapable of performing the requested operation. __call__ /usr/lib/python2.7/dist-packages/nova/api/openstack/wsgi.py:1165
2013-08-07 18:05:55.489 INFO nova.osapi_compute.wsgi.server [req-0cfe760f-2e74-4e92-919c-663ba02c7f2f 20390b639d4449c18926dca5e038ec5e d1e3aae242f14c488d2225dcbf1e96d6] 172.19.136.13 "POST /v2/d1e3aae242f14c488d2225dcbf1e96d6/servers HTTP/1.1" status: 500 len: 335 time: 60.5262640

A couple of things about my deployment that may help you help me:
-One controller node running: nova-conductor, nova-scheduler, keystone,
quantum-server, rabbitmq
-One compute node running: nova-api, nova-compute, glance
-One storage node running cinder

My ideas:
-I think it could be a problem with nova-compute using nova-conductor (I
really don't know how to tell nova to use it...). Somehow messages from
nova-compute don't reach nova-conductor on the controller node, even though
nova-compute is connected to rabbit and so is nova-conductor. The settings
involved are sketched below.
-I haven't found any message like "wrong password for rabbit" in any log
file.
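
For reference, a minimal sketch of the conductor- and rabbit-related pieces
of /etc/nova/nova.conf on the compute node, assuming a Grizzly-era layout;
the controller address and credentials below are placeholders, not values
taken from this thread:

[DEFAULT]
# must point at the node running rabbitmq (the controller in this deployment)
rabbit_host = <controller-ip>
rabbit_userid = guest
rabbit_password = guest

[conductor]
# false (the default) makes nova-compute send its database calls over RPC to nova-conductor
use_local = false

A quick connectivity check from the compute node is to confirm an
established TCP session to port 5672 on the controller, e.g. with
"netstat -tnp | grep 5672".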



--
Pavlik Salles Juan José


jjpavlik at gmail

Aug 7, 2013, 12:33 PM

Post #2 of 4
Re: Can't start a vm, from dashboard

Here is some more information: I tried to boot a VM from the CLI and it
doesn't exactly fail, but when I check the VM's status in the dashboard it
says "Scheduling" and never changes its state to "running" or "error".


--
Pavlik Salles Juan José


jjpavlik at gmail

Aug 7, 2013, 2:05 PM

Post #3 of 4
Re: Can't start a vm, from dashboard

According to the docs, this problem should be related to some service not
answering nova-api. I only have 3 servers in my deployment, so I don't
think it is a problem with the number of messages in the queues.
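
Since the error is an RPC timeout, another thing worth looking at is
whether the nova services are actually attached to their queues in
RabbitMQ; a sketch, to be run on the controller where rabbitmq lives (the
queue names assume the default nova topics):

rabbitmqctl list_queues name messages consumers   # the "conductor" and "scheduler" queues should each show at least one consumer
rabbitmqctl list_connections                      # roughly one connection per running nova service is expected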


--
Pavlik Salles Juan José


jjpavlik at gmail

Aug 7, 2013, 5:55 PM

Post #4 of 4
Re: Can't start a vm, from dashboard

Is there any way I can test nova-conductor and nova-scheduler to be sure
they are working as they should? If I run nova-manage service list,
everything looks fine. I'm running out of ideas hahaha.
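
Beyond nova-manage service list, the services log whether they reached the
message bus at startup; a quick check on the controller, assuming the
default log locations:

grep "Connected to AMQP server" /var/log/nova/nova-conductor.log
grep "Connected to AMQP server" /var/log/nova/nova-scheduler.log

Comparing the host and port each service reports against the rabbit_host
that nova-api and nova-compute use is a quick way to spot a service that is
talking to the wrong broker.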


--
Pavlik Salles Juan José
