
Y Sertdemir's profile - activity

2016-09-06 20:53:53 -0600 received badge  Favorite Question (source)
2016-07-02 11:58:56 -0600 received badge  Good Answer (source)
2015-06-15 11:49:04 -0600 received badge  Famous Question (source)
2015-04-12 01:41:31 -0600 received badge  Taxonomist
2015-04-09 11:39:14 -0600 received badge  Famous Question (source)
2015-03-25 08:57:31 -0600 received badge  Nice Answer (source)
2015-01-05 03:52:55 -0600 received badge  Famous Question (source)
2015-01-05 03:52:55 -0600 received badge  Famous Question (source)
2014-10-16 11:24:57 -0600 received badge  Notable Question (source)
2014-10-16 11:24:57 -0600 received badge  Popular Question (source)
2014-10-16 11:24:57 -0600 received badge  Famous Question (source)
2014-09-29 08:11:55 -0600 commented answer Defining default user-password for Ubuntu Cloud image

Unfortunately, this worked on the Grizzly version. On Havana, I guess they updated the cloud-config package and there is no password generation anymore.

2014-07-29 19:52:48 -0600 marked best answer Upgrade and Migrate to Havana

Hello,

I want to upgrade and migrate my OpenStack environment from Grizzly to Havana. The servers run Ubuntu 12.04 LTS and the network manager is Quantum. My controller and network node are installed on different servers. I want to migrate and upgrade the services one by one and move the controller and network nodes to other servers. For example, I would stop the keystone service on the old controller, copy the same config file to the new server, and start the Havana Keystone service on the new controller. That way I do not need to back up the old services and I can roll back whenever I want. Is this a good plan?
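For Keystone, for example, the move I have in mind looks roughly like this (the hostnames are placeholders, not my real servers):

# on the old controller
service keystone stop
rsync -a /etc/keystone/ new-controller:/etc/keystone/

# on the new controller (Havana packages)
apt-get install keystone
keystone-manage db_sync
service keystone start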

Another question is about upgrading Quantum to Neutron: while the Quantum service is down, will my instances lose connectivity?

2014-07-16 03:35:00 -0600 received badge  Notable Question (source)
2014-07-04 02:56:29 -0600 received badge  Popular Question (source)
2014-07-02 00:52:42 -0600 asked a question Deleted image, cannot boot VM

Hello,

I needed to move the instances' /var/lib/nova/instances directory to another storage, so I did it this way: shut down the VM, mount the new volume on /var/lib/nova/instances, and copy the VM files there from the old storage. The problem is that the images of some of the instances had been deleted, so they gave an "image not found" error in the nova-compute logs. I investigated the issue and found that the _base directory contains the old images of the VMs. I wonder why the instances need these files. Secondly, if there are lots of deleted images, how can I evacuate the server to another host? It will give the same error on the new host. Lastly, I found that "Members" who are given permission to see a specific project cannot see the image name in the instances list in the dashboard; it shows "(not found)". When I check the same instances with admin rights, the image name is shown. I checked the glance commands and I can list the deleted image IDs with the "deleted" tag.

My OpenStack environment is based on Ubuntu 12.04/Havana.
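Roughly the steps I followed for the move (the device name and mount point are just examples, not my real paths):

nova stop <instance-id>
mount /dev/sdb1 /mnt/new-instances
rsync -a /var/lib/nova/instances/ /mnt/new-instances/
umount /mnt/new-instances
mount /dev/sdb1 /var/lib/nova/instances
service nova-compute restart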

2014-06-30 06:16:18 -0600 received badge  Famous Question (source)
2014-06-09 12:03:04 -0600 marked best answer How to add users from Active Directory/LDAP to an existing Keystone?

Hello,

I am using the default Keystone installation on Grizzly; I guess it is SQL-backed. I want the OpenStack users to come from an Active Directory server via LDAP integration. Can I do it without changing the service users? I have nova, glance, etc. users; do I need to migrate them to Active Directory as well?
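What I am considering is pointing the identity driver at AD in keystone.conf, something like this sketch (the host, suffix and bind account are placeholders, not my real values):

[identity]
driver = keystone.identity.backends.ldap.Identity

[ldap]
url = ldap://ad.example.local
user = cn=keystone,cn=Users,dc=example,dc=local
password = xxxxxxx
suffix = dc=example,dc=local
user_tree_dn = cn=Users,dc=example,dc=local
user_objectclass = person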

2014-05-21 01:44:26 -0600 commented question Unable to find procedure for LBaaS in icehouse

I did not try it on Red Hat, but I guess it should be the same.

2014-05-21 01:41:56 -0600 commented question cirros blank in console

Check the instance's log page. If nothing is written there, something is misconfigured in your setup. Check the nova logs on your controller and compute nodes.

2014-05-21 01:38:44 -0600 received badge  Famous Question (source)
2014-05-12 03:25:55 -0600 received badge  Famous Question (source)
2014-05-06 03:53:51 -0600 received badge  Famous Question (source)
2014-04-24 01:23:34 -0600 commented answer Metadata Problem in Neutron Flat Network

Thanks for the info.

2014-04-22 08:56:25 -0600 received badge  Notable Question (source)
2014-04-22 01:42:44 -0600 answered a question Metadata Problem in Neutron Flat Network

I finally found the solution in this forum: http://lists.openstack.org/pipermail/...

When you use flat networking you have two options:

  1. Use metadata via the DHCP namespace (source: http://techbackground.blogspot.ie/201... ). I did not test this setup because I cannot change my subnet right now. The idea is not to set the default gateway option on the subnet, and instead add a 0.0.0.0/0 route to the subnet manually (see the discussion part of the link).

  2. Create a router on the network node in OpenStack, create a port on your instance network for this router, and choose an IP different from your real router's. This router will be used only for metadata. Install neutron-metadata-agent on the network node where the l3 agent runs. Then configure nova.conf on the compute and controller nodes:

    metadata_host = 192.168.0.1
    service_neutron_metadata_proxy = True
    neutron_metadata_proxy_shared_secret = x1x1x1x1x

Configure metadata_agent.ini on the network node:

auth_url = http://192.168.0.1:5000/v2.0
auth_region = RegionOne
admin_tenant_name = service
admin_user = neutron
admin_password = xxxxxxx
nova_metadata_ip = 192.168.0.1
nova_metadata_port = 8775
metadata_proxy_shared_secret = x1x1x1x1x
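After changing these files, I restarted the relevant services and checked metadata from inside an instance, roughly like this (the metadata path is the same one cloud-init uses):

# on the network node
service neutron-metadata-agent restart
service neutron-l3-agent restart

# on the controller and compute nodes
service nova-api restart

# from inside an instance
curl http://169.254.169.254/2009-04-04/meta-data/instance-id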
2014-04-21 08:51:45 -0600 received badge  Popular Question (source)
2014-04-20 20:11:55 -0600 received badge  Notable Question (source)
2014-04-17 09:58:09 -0600 asked a question Metadata Problem in Neutron Flat Network

Hello,

My setup is Ubuntu 12.04.04 with the saucy kernel. I set up my environment with a Neutron flat network. I have a management network used for the services and a shared network for the instances. DHCP and instance networking work fine, but metadata does not. I do not use the l3 agent or metadata agent on the Neutron server because I have a flat network anyway. My instances get their IPs fine, but when they try to fetch instance-data from the metadata service, they fail with a timeout error.

console.log from a CirrOS image (same error with Ubuntu images):

checking http://169.254.169.254/2009-04-04/instance-id
failed 1/20: up 1.13. request failed
failed 2/20: up 13.18. request failed
failed 3/20: up 25.20. request failed
failed 4/20: up 37.23. request failed
failed 5/20: up 49.25. request failed
failed 6/20: up 61.27. request failed
failed 7/20: up 73.29. request failed
failed 8/20: up 85.32. request failed
failed 9/20: up 97.34. request failed
failed 10/20: up 109.36. request failed
failed 11/20: up 121.38. request failed
failed 12/20: up 133.40. request failed
failed 13/20: up 145.43. request failed
failed 14/20: up 157.45. request failed
failed 15/20: up 169.48. request failed
failed 16/20: up 181.50. request failed
failed 17/20: up 193.52. request failed
failed 18/20: up 205.54. request failed
failed 19/20: up 217.56. request failed
failed 20/20: up 229.58. request failed
failed to read iid from metadata. tried 20

In the CirrOS image, I tried to connect to my metadata IP with "curl http://172.25.166.1:8775/" and it responds fine, but when I try "curl http://169.254.169.254/" it does not return anything and eventually times out.

Because of this problem, my Ubuntu image does not work. I am not sure whether Ubuntu images generate a password at all when the metadata service is disabled. I tried to edit my Ubuntu image to create a static password, but it does not assign the password to the user.

1 - I tried almost all the different metadata settings in nova.conf, but nothing worked. I guess Neutron flat networking needs a NAT rule on the compute host or something, but I cannot see any NAT definition anywhere in iptables (see the check below).

2 - I tried installing the nova-api-metadata service on the compute host. It does not change anything; do I really need this service on the compute nodes or not?

3 - If I disable the metadata service in nova, will Ubuntu images still process cloud.cfg?
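This is how I looked for the NAT rule on the compute host; nothing comes back (the grep target is just the metadata address, nothing environment-specific):

iptables -t nat -S | grep 169.254.169.254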

2014-04-15 08:21:12 -0600 received badge  Notable Question (source)
2014-04-15 08:08:05 -0600 commented answer Neutron DHCP agent cannot ping gateway IP

Progress: I realized that my network node is a virtual machine in a VMware cluster. In vCenter, promiscuous mode is disabled in the default settings. I enabled promiscuous mode on the port and now my DHCP agent can ping outside. My metadata problem still continues.

2014-04-15 05:57:43 -0600 received badge  Notable Question (source)
2014-04-15 00:40:47 -0600 commented answer Neutron DHCP agent cannot ping gateway IP

I found out that the instances send packets to our physical router. There are logs in the router showing that an instance tries to reach 169.254.169.254:80, but of course it fails.

2014-04-14 08:06:03 -0600 received badge  Popular Question (source)
2014-04-14 07:37:01 -0600 answered a question Neutron DHCP agent cannot ping gateway IP

I have a controller server, a network server and 3 compute hosts, all on different servers. My setup is a flat network, so there is no L3 router. There is a single, shared network.

The config files, starting with dhcp_agent.ini:

interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq

neutron.conf on Network Node:

core_plugin = neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2
auth_strategy = keystone
rpc_backend = neutron.openstack.common.rpc.impl_kombu
rabbit_host = 172.25.166.1
rabbit_userid = xxxxxxxxxxxxxxxxx
rabbit_password = xxxxxxxxxxxx
notification_driver = neutron.openstack.common.notifier.rpc_notifier

[keystone_authtoken]
auth_url = http://172.25.166.1:35357/v2.0
auth_host = 172.25.166.1
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = neutron
admin_password = xxxxx
signing_dir = $state_path/keystone-signing

[database]
connection = mysql://xxxxxx:xxxx@172.25.166.1/neutron

Neutron OVS Plugin config:

network_vlan_ranges = physnet1
bridge_mappings = physnet1:br-eth2
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

[DATABASE]
connection = mysql://neutron:Neutron159@172.25.166.1/neutron
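For completeness, these are the quick checks I run on the network node to confirm that the DHCP agent, its namespace and the OVS bridges are in place (nothing environment-specific here):

neutron agent-list
ip netns list
ovs-vsctl show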
2014-04-13 07:50:37 -0600 asked a question Neutron DHCP agent cannot ping gateway IP

Hello

I am trying to configure a Neutron flat network for a multi-host environment. I am using Ubuntu 12.04 with the saucy kernel and Havana packages. I followed the OpenStack document on simple networking setup: http://docs.openstack.org/havana/inst...

The Ubuntu image gets an IP on first boot, but it cannot reach metadata. console.log shows timeout error messages.

ci-info: eth0  : 1 172.32.101.52   255.255.255.0   fa:16:3e:ef:1d:cd
ci-info: route-0: 0.0.0.0         172.32.101.254  0.0.0.0         eth0   UG
ci-info: route-1: 172.32.101.0    0.0.0.0         255.255.255.0   eth0   U
cloud-init start running: Sun, 13 Apr 2014 12:21:36 +0000. up 3.60 seconds
2014-04-13 12:22:27,617 - util.py[WARNING]: 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [50/120s]: url error [timed out]
2014-04-13 12:23:18,670 - util.py[WARNING]: 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [101/120s]: url error [timed out]
2014-04-13 12:23:36,693 - util.py[WARNING]: 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [119/120s]: url error [timed out]
2014-04-13 12:23:37,697 - DataSourceEc2.py[CRITICAL]: giving up on md after 120 seconds

Because of this error, the Ubuntu image cannot generate its random username and password on first boot.

I investigated the network node, which runs the Neutron DHCP agent and the plugin agent:

I tried to ping from the DHCP agent namespace to the physical gateway, no success:

ip netns exec qdhcp-3b1f3f1d-f64a-4573-92eb-9ae7ab5d0986 ping 172.32.101.254

I tried to ping from the DHCP agent namespace to an instance, no success:

ip netns exec qdhcp-3b1f3f1d-f64a-4573-92eb-9ae7ab5d0986 ping 172.32.101.52

I tried to ping the DHCP agent from the same subnet, no success:

ping 172.32.101.50

When an instance has an IP, it is reachable from outside:

  ping 172.32.101.52
PING 172.32.101.52 (172.32.101.52) 56(84) bytes of data.
64 bytes from 172.32.101.52: icmp_req=1 ttl=64 time=2.00 ms
64 bytes from 172.32.101.52: icmp_req=2 ttl=64 time=0.564 ms
64 bytes from 172.32.101.52: icmp_req=3 ttl=64 time=0.374 ms

My ovs-vsctl show output on the network node:

Bridge br-int
    Port "int-br-eth2"
        Interface "int-br-eth2"
    Port br-int
        Interface br-int
            type: internal
    Port "tapb39f3c76-43"
        tag: 1
        Interface "tapb39f3c76-43"
            type: internal
Bridge "br-eth2"
    Port "br-eth2"
        Interface "br-eth2"
            type: internal
    Port "eth2"
        Interface "eth2"
    Port "phy-br-eth2"
        Interface "phy-br-eth2"
ovs_version: "1.10.2"

How can I fix the metadata issue and get the DHCP agent to reach the outside router?
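One more thing I can try is to watch traffic on the DHCP port from inside its namespace while pinging it, to see whether packets reach the tap interface at all (assuming tapb39f3c76-43 from the ovs-vsctl output above is the DHCP port):

ip netns exec qdhcp-3b1f3f1d-f64a-4573-92eb-9ae7ab5d0986 tcpdump -ni tapb39f3c76-43 icmp or arp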

2014-04-09 06:36:58 -0600 asked a question Network Segmentation-ID doesn't match with Port Tags

Hello,

I just upgraded our OpenStack environment from Grizzly to Havana. After upgrading Quantum to Neutron, I lost connectivity to the instances. The instances cannot get IP addresses from the DHCP agent.
I am using the "Per-tenant routers with private networks" guide for my environment. It is based on Ubuntu 12.04 LTS.

I checked almost everything: every log, every conf file. Finally, I discovered that my internal network's segmentation ID does not match the tags on the network and compute hosts.

Example:

I checked this internal network, which has a segmentation ID of 5:

root@KSFINODE1:/home/ksadmin# neutron net-show IE-InternalNetwork

+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| id                        | c1d56557-57d8-4957-9061-099b4f531b0a |
| name                      | IE-InternalNetwork                   |
| provider:network_type     | gre                                  |
| provider:physical_network |                                      |
| provider:segmentation_id  | 5                                    |
| router:external           | False                                |
| shared                    | False                                |
| status                    | ACTIVE                               |
| subnets                   | 6c1b0baa-8976-4069-9ef4-d2e72a8f3369 |
| tenant_id                 | f937bf0a4c8a46ab9262fca6fae20741     |
+---------------------------+--------------------------------------+

I found an example DHCP port on this network; its tap device should be on the network node:

root@KSFINODE1:/home/ksadmin# neutron port-list | grep 244b
| 244b58d8-99c2-4fe9-9c32-3b35c4ac5c53 | | fa:16:3e:c9:9b:4f | {"subnet_id": "6c1b0baa-8976-4069-9ef4-d2e72a8f3369", "ip_address": "10.1.1.9"}

When I checked this port on the network node, I saw that it is tagged with 1:

Port "tap244b58d8-99"
            tag: 1
            Interface "tap244b58d8-99"
                type: internal

On the same internal network, I found an instance port on a compute node. It is tagged with 3!

Port "qvo14f0a032-57"
            tag: 3
            Interface "qvo14f0a032-57"

As you can see, they are all tagged with different IDs.
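To compare how each host maps this network's GRE segmentation ID to its local VLAN tag, the br-tun flows can be dumped on the network and compute nodes (a generic check, not output from my environment):

ovs-ofctl dump-flows br-tun | grep tun_id

The local tags are assigned per host by the OVS agent, so what I really want to confirm is that tun_id 0x5 maps to whatever tag the ports carry on each host.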

2014-04-07 06:49:09 -0600 received badge  Famous Question (source)
2014-04-07 01:12:33 -0600 received badge  Popular Question (source)
2014-04-03 00:39:26 -0600 received badge  Student