
Mailing List Archive: OpenStack: Operators

Fwd: Can't start a vm, from dashboard

 

 



jjpavlik at gmail

Aug 8, 2013, 5:06 PM

Post #1 of 4
Fwd: Can't start a vm, from dashboard

You are right Wilson, there's no hostId in the VM description:

+-----------------------------+-----------------------------------------------------------+
| Property | Value
|
+-----------------------------+-----------------------------------------------------------+
| status | BUILD
|
| updated | 2013-08-08T19:23:01Z
|
| OS-EXT-STS:task_state | scheduling
|
| key_name | None
|
| image | Ubuntu 12.04.2 LTS
(1359ca8d-23a2-40e8-940f-d90b3e68bb39) |
| hostId |
|
| OS-EXT-STS:vm_state | building
|
| flavor | m1.tiny (1)
|
| id | b0583cca-63c2-481f-8b94-7aeb2e86641f
|
| security_groups | [{u'name': u'default'}]
|
| user_id | 20390b639d4449c18926dca5e038ec5e
|
| name | prueba11
|
| created | 2013-08-08T19:19:44Z
|
| tenant_id | d1e3aae242f14c488d2225dcbf1e96d6
|
| OS-DCF:diskConfig | MANUAL
|
| metadata | {}
|
| accessIPv4 |
|
| accessIPv6 |
|
| progress | 0
|
| OS-EXT-STS:power_state | 0
|
| OS-EXT-AZ:availability_zone | nova
|
| config_drive |
|
+-----------------------------+-----------------------------------------------------------+

I found this new entry in the scheduler log; it must be related to this:

2013-08-08 19:19:46.000 ERROR nova.openstack.common.rpc.amqp [req-b049c969-c411-4b76-9aa5-ad88d714c4ab 20390b639d4449c18926dca5e038ec5e d1e3aae242f14c488d2225dcbf1e96d6] Exception during message handling
2013-08-08 19:19:46.000 29114 TRACE nova.openstack.common.rpc.amqp Traceback (most recent call last):
2013-08-08 19:19:46.000 29114 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/amqp.py", line 430, in _process_data
2013-08-08 19:19:46.000 29114 TRACE nova.openstack.common.rpc.amqp     rval = self.proxy.dispatch(ctxt, version, method, **args)
2013-08-08 19:19:46.000 29114 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/dispatcher.py", line 133, in dispatch
2013-08-08 19:19:46.000 29114 TRACE nova.openstack.common.rpc.amqp     return getattr(proxyobj, method)(ctxt, **kwargs)
2013-08-08 19:19:46.000 29114 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/scheduler/manager.py", line 127, in run_instance
2013-08-08 19:19:46.000 29114 TRACE nova.openstack.common.rpc.amqp     'schedule', *instance_uuids):
2013-08-08 19:19:46.000 29114 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/compute/utils.py", line 318, in __enter__
2013-08-08 19:19:46.000 29114 TRACE nova.openstack.common.rpc.amqp     self.conductor.action_event_start(self.context, event)
2013-08-08 19:19:46.000 29114 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/conductor/api.py", line 264, in action_event_start
2013-08-08 19:19:46.000 29114 TRACE nova.openstack.common.rpc.amqp     return self._manager.action_event_start(context, values)
2013-08-08 19:19:46.000 29114 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/utils.py", line 1348, in wrapper
2013-08-08 19:19:46.000 29114 TRACE nova.openstack.common.rpc.amqp     return func(*args, **kwargs)
2013-08-08 19:19:46.000 29114 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/conductor/manager.py", line 334, in action_event_start
2013-08-08 19:19:46.000 29114 TRACE nova.openstack.common.rpc.amqp     evt = self.db.action_event_start(context, values)
2013-08-08 19:19:46.000 29114 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/db/api.py", line 1625, in action_event_start
2013-08-08 19:19:46.000 29114 TRACE nova.openstack.common.rpc.amqp     return IMPL.action_event_start(context, values)
2013-08-08 19:19:46.000 29114 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/db/sqlalchemy/api.py", line 4624, in action_event_start
2013-08-08 19:19:46.000 29114 TRACE nova.openstack.common.rpc.amqp     instance_uuid=values['instance_uuid'])
2013-08-08 19:19:46.000 29114 TRACE nova.openstack.common.rpc.amqp InstanceActionNotFound: Action for request_id req-b049c969-c411-4b76-9aa5-ad88d714c4ab on instance b0583cca-63c2-481f-8b94-7aeb2e86641f not found
2013-08-08 19:19:46.000 29114 TRACE nova.openstack.common.rpc.amqp

It's driving me crazy. Does "InstanceActionNotFound..." mean anything to
you?



2013/8/8 Mike Wilson <geekinutah [at] gmail>

> Juan,
>
> If your instance shows the task_state as "scheduling", it's possible that
> your new instance never made it to the scheduler. When you do a nova show
> <instance_id>, does it say which host node it is supposed to be on? If it
> does, then the scheduler probably did its thing but failed the RPC to the
> compute node; if it doesn't have a host, then your problem is that the
> message never made it from nova-api to nova-scheduler.
>
> In any case, it looks like rabbit is not quite set up correctly on your end.
>
> -Mike
>
>
> On Wed, Aug 7, 2013 at 6:55 PM, Juan José Pavlik Salles <
> jjpavlik [at] gmail> wrote:
>
>> Is there any way I can test nova-conductor and nova-scheduler to be sure
>> they are working like they should? If I run nova-manage service list,
>> everything looks fine. I'm running out of ideas hahaha.
>>
>>
>> 2013/8/7 Juan José Pavlik Salles <jjpavlik [at] gmail>
>>
>>> According to the docs, this problem should be related to some service that
>>> isn't answering nova-api. I only have 3 servers in my deployment, so I
>>> don't think this problem is related to the number of messages in the
>>> queues.
>>>
>>>
>>> 2013/8/7 Juan José Pavlik Salles <jjpavlik [at] gmail>
>>>
>>>> Here is some more information: I tried to boot a VM from the CLI and it
>>>> doesn't actually fail. But when I check the VM's status in the dashboard it
>>>> says "Scheduling" and never changes its state to "running" or "error".
>>>>
>>>>
>>>> 2013/8/7 Juan José Pavlik Salles <jjpavlik [at] gmail>
>>>>
>>>>> I just finished installing everything and tried to create my first VM
>>>>> from the dashboard, but it doesn't work. After choosing a flavor and hitting
>>>>> launch it starts "creating" it, but after a few seconds it stops with:
>>>>> "Error: There was an error submitting the form. Please try again.". The
>>>>> only place where I found something related is nova.log on my compute
>>>>> node; here is the log:
>>>>>
>>>>> 2013-08-07 18:05:55.293 DEBUG nova.openstack.common.rpc.common [req-0cfe760f-2e74-4e92-919c-663ba02c7f2f 20390b639d4449c18926dca5e038ec5e d1e3aae242f14c488d2225dcbf1e96d6] Timed out waiting for RPC response: timed out _error_callback /usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/impl_kombu.py:628
>>>>> 2013-08-07 18:05:55.479 DEBUG nova.quota [req-0cfe760f-2e74-4e92-919c-663ba02c7f2f 20390b639d4449c18926dca5e038ec5e d1e3aae242f14c488d2225dcbf1e96d6] Rolled back reservations ['3e941a2b-2cc6-4f01-8dc1-13dc09369141', '411f6f70-415e-4a21-aa06-3980070d6095', 'd4791eb7-b75a-4ab8-bfdb-5d5cd201e40d'] rollback /usr/lib/python2.7/dist-packages/nova/quota.py:1012
>>>>> 2013-08-07 18:05:55.480 ERROR nova.api.openstack [req-0cfe760f-2e74-4e92-919c-663ba02c7f2f 20390b639d4449c18926dca5e038ec5e d1e3aae242f14c488d2225dcbf1e96d6] Caught error: Timeout while waiting on RPC response.
>>>>> 2013-08-07 18:05:55.480 29278 TRACE nova.api.openstack Traceback (most recent call last):
>>>>> 2013-08-07 18:05:55.480 29278 TRACE nova.api.openstack   File "/usr/lib/python2.7/dist-packages/nova/api/openstack/__init__.py", line 81, in __call__
>>>>> 2013-08-07 18:05:55.480 29278 TRACE nova.api.openstack     return req.get_response(self.application)
>>>>> 2013-08-07 18:05:55.480 29278 TRACE nova.api.openstack   File "/usr/lib/python2.7/dist-packages/webob/request.py", line 1296, in send
>>>>> 2013-08-07 18:05:55.480 29278 TRACE nova.api.openstack     application, catch_exc_info=False)
>>>>> 2013-08-07 18:05:55.480 29278 TRACE nova.api.openstack   File "/usr/lib/python2.7/dist-packages/webob/request.py", line 1260, in call_application
>>>>> 2013-08-07 18:05:55.480 29278 TRACE nova.api.openstack     app_iter = application(self.environ, start_response)
>>>>> 2013-08-07 18:05:55.480 29278 TRACE nova.api.openstack   File "/usr/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
>>>>> 2013-08-07 18:05:55.480 29278 TRACE nova.api.openstack     return resp(environ, start_response)
>>>>> 2013-08-07 18:05:55.480 29278 TRACE nova.api.openstack   File "/usr/lib/python2.7/dist-packages/keystoneclient/middleware/auth_token.py", line 450, in __call__
>>>>> 2013-08-07 18:05:55.480 29278 TRACE nova.api.openstack     return self.app(env, start_response)
>>>>> ...
>>>>> 2013-08-07 18:05:55.480 29278 TRACE nova.api.openstack   File "/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/amqp.py", line 551, in __iter__
>>>>> 2013-08-07 18:05:55.480 29278 TRACE nova.api.openstack     self._iterator.next()
>>>>> 2013-08-07 18:05:55.480 29278 TRACE nova.api.openstack   File "/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/impl_kombu.py", line 648, in iterconsume
>>>>> 2013-08-07 18:05:55.480 29278 TRACE nova.api.openstack     yield self.ensure(_error_callback, _consume)
>>>>> 2013-08-07 18:05:55.480 29278 TRACE nova.api.openstack   File "/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/impl_kombu.py", line 566, in ensure
>>>>> 2013-08-07 18:05:55.480 29278 TRACE nova.api.openstack     error_callback(e)
>>>>> 2013-08-07 18:05:55.480 29278 TRACE nova.api.openstack   File "/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/impl_kombu.py", line 629, in _error_callback
>>>>> 2013-08-07 18:05:55.480 29278 TRACE nova.api.openstack     raise rpc_common.Timeout()
>>>>> 2013-08-07 18:05:55.480 29278 TRACE nova.api.openstack Timeout: Timeout while waiting on RPC response.
>>>>> 2013-08-07 18:05:55.480 29278 TRACE nova.api.openstack
>>>>> 2013-08-07 18:05:55.488 INFO nova.api.openstack [req-0cfe760f-2e74-4e92-919c-663ba02c7f2f 20390b639d4449c18926dca5e038ec5e d1e3aae242f14c488d2225dcbf1e96d6] http://172.19.136.13:8774/v2/d1e3aae242f14c488d2225dcbf1e96d6/servers returned with HTTP 500
>>>>> 2013-08-07 18:05:55.488 DEBUG nova.api.openstack.wsgi [req-0cfe760f-2e74-4e92-919c-663ba02c7f2f 20390b639d4449c18926dca5e038ec5e d1e3aae242f14c488d2225dcbf1e96d6] Returning 500 to user: The server has either erred or is incapable of performing the requested operation. __call__ /usr/lib/python2.7/dist-packages/nova/api/openstack/wsgi.py:1165
>>>>> 2013-08-07 18:05:55.489 INFO nova.osapi_compute.wsgi.server [req-0cfe760f-2e74-4e92-919c-663ba02c7f2f 20390b639d4449c18926dca5e038ec5e d1e3aae242f14c488d2225dcbf1e96d6] 172.19.136.13 "POST /v2/d1e3aae242f14c488d2225dcbf1e96d6/servers HTTP/1.1" status: 500 len: 335 time: 60.5262640
>>>>>
>>>>> A couple of things about my deployment that may help you help me:
>>>>> -One controller node running: nova-conductor, nova-scheduler,
>>>>> keystone, quantum-server, rabbitmq
>>>>> -One compute node running: nova-api, nova-compute, glance
>>>>> -One storage node running cinder
>>>>>
>>>>> My ideas:
>>>>> -I think it could be a problem with nova-compute using
>>>>> nova-conductor (I really don't know how to tell nova to use it...); somehow
>>>>> messages from nova-compute don't reach nova-conductor on the controller
>>>>> node, even though nova-compute is connected to rabbit and so is
>>>>> nova-conductor.
>>>>> -I haven't found any message like "wrong password for rabbit" in any
>>>>> log file.
>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> Pavlik Salles Juan José
>>>>>
>>>>
>>>>
>>>>
>>>> --
>>>> Pavlik Salles Juan José
>>>>
>>>
>>>
>>>
>>> --
>>> Pavlik Salles Juan José
>>>
>>
>>
>>
>> --
>> Pavlik Salles Juan José
>>
>> _______________________________________________
>> OpenStack-operators mailing list
>> OpenStack-operators [at] lists
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>
>>
>


--
Pavlik Salles Juan José





geekinutah at gmail

Aug 8, 2013, 6:34 PM

Post #2 of 4
Re: Fwd: Can't start a vm, from dashboard [In reply to]

Any time you call the nova API and ask it to boot an instance, before it
starts handing things off to the scheduler it creates an action in the
database. Your scheduler is complaining that it can't find that action.
In your initial email you didn't say where your database is. Are you
running a mysql host somewhere? Nova-api needs to be able to get at the
same database as nova-scheduler. I would start poking around there and
see if that's your problem.
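
One quick way to poke at that (just a sketch, not from your original mails;
paths and option names assume a stock Grizzly-era nova.conf): on the host
running nova-api and on the host running nova-scheduler, compare which
database each service is actually pointed at:

  grep sql_connection /etc/nova/nova.conf

If the two lines differ, or the option is missing on one of them, nova-api
is writing its action records into a database the scheduler never reads,
which would explain exactly this kind of InstanceActionNotFound error.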

-Mike


On Thu, Aug 8, 2013 at 6:06 PM, Juan José Pavlik Salles
<jjpavlik [at] gmail> wrote:

> [quoted text snipped]


jjpavlik at gmail

Aug 8, 2013, 7:13 PM

Post #3 of 4
Re: Fwd: Can't start a vm, from dashboard [In reply to]

I really appreciate your help Mike! You saved me lots of hours!!! I didn't
know nova-api needs access to the DB, so I never included

sql_connection=mysql://nova:PASSWORD@DB_HOST/nova

in the nova.conf of my compute node. I just added the line and everything
started working!!! This makes me think I should move nova-api to the
controller node instead of the compute node.
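
For anyone who finds this thread later, the fix boils down to something
like this on the node that runs nova-api (only a sketch; the password and
DB host are placeholders, and the restart command is the usual one on
Ubuntu, adjust for your distro):

  # /etc/nova/nova.conf on the nova-api node
  [DEFAULT]
  sql_connection = mysql://nova:NOVA_DBPASS@CONTROLLER_IP/nova

  # restart the API so it picks the option up
  service nova-api restart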

Thanks again!!!




2013/8/8 Mike Wilson <geekinutah [at] gmail>

> [quoted text snipped]


--
Pavlik Salles Juan José


geekinutah at gmail

Aug 9, 2013, 12:25 PM

Post #4 of 4
Re: Fwd: Can't start a vm, from dashboard [In reply to]

Juan,

Good to hear! I think the general recommendation is to have a set of
controller nodes that run things like nova-api, nova-scheduler,
nova-consoleauth, etc. The compute nodes should only run what the
hypervisor needs to spin up VMs: for example nova-compute, nova-network in
some cases, or perhaps the neutron agents if you are running that way.
Good luck with your deployment! I have read through the Operations Guide
(http://docs.openstack.org/ops/) and find it to be a really great
resource. I would recommend you check it out, as it has more information
about suggested configurations of OpenStack.
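
As a rough illustration of that kind of split (host names here are
hypothetical, and this is only a sketch of the idea, not a prescription):

  controller:    nova-api, nova-scheduler, nova-conductor, nova-consoleauth,
                 keystone, glance, quantum-server (or neutron-server),
                 rabbitmq-server, mysql
  compute01..N:  nova-compute, plus nova-network or the quantum/neutron
                 agents where applicable
  storage01:     cinder-volume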

-Mike


On Thu, Aug 8, 2013 at 8:13 PM, Juan José Pavlik Salles
<jjpavlik [at] gmail> wrote:

> [quoted text snipped]
