
hossam's profile - activity

2017-10-17 05:37:41 -0500 received badge  Famous Question (source)
2017-04-04 03:54:17 -0500 received badge  Notable Question (source)
2017-01-24 16:45:16 -0500 commented answer network problem - physical network VLANs (two physical eth*)

Yes, I will start a new installation from scratch using your setup from http://lxer.com/module/newswire/view/236067/index.html

I think my earlier configuration was wrong.

Thanks a lot for your help and support.

2017-01-23 15:33:14 -0500 commented answer network problem - physical network VLANs (two physical eth*)

Hello dbaxps, thanks for your reply. It doesn't work for me either, but I will do a fresh install and run another test.

2017-01-22 22:30:22 -0500 received badge  Popular Question (source)
2017-01-22 00:16:07 -0500 asked a question network problem - physical network VLANs (two physical eth*)

Hello, I am testing two physical network cards with an RDO all-in-one environment. I succeeded in attaching both network cards to OpenStack, and OpenStack VMs can get IPs from both cards (br-ex and br-ex1). My problem: I can reach the br-ex and br-ex1 IPs from any VLAN in the network, but when I SSH into the RDO node itself and ping any VLAN other than those of br-ex and br-ex1, I can't reach it.

My configuration:

br-ex:     10.13.43.22
br-ex1:    10.13.44.22
br-ex GW:  10.13.43.1
br-ex1 GW: 10.13.44.1

[root@RDO network-scripts]# cat ifcfg-eth0
DEVICE=eth0
NAME=eth0
DEVICETYPE=ovs
TYPE=OVSPort
OVS_BRIDGE=br-ex
ONBOOT=yes
BOOTPROTO=none
------------------------------------------------------------------------------
[root@RDO network-scripts]# cat ifcfg-eth1
DEVICE=eth1
NAME=eth1
DEVICETYPE=ovs
TYPE=OVSPort
OVS_BRIDGE=br-ex1
ONBOOT=yes
BOOTPROTO=none
------------------------------------------------------------------------------
[root@RDO network-scripts]# cat ifcfg-br-ex
ONBOOT=yes
IPADDR=10.13.43.22
NETMASK=255.255.255.0
NETWORK=10.13.43.0
VLAN=yes
DEVICE=br-ex
NAME=br-ex
DEVICETYPE=ovs
OVSBOOTPROTO=static
TYPE=OVSBridge
------------------------------------------------------------------------------
[root@RDO network-scripts]# cat ifcfg-br-ex1
ONBOOT=yes
IPADDR=10.13.44.22
NETMASK=255.255.255.0
NETWORK=10.13.44.0
VLAN=yes
DEVICE=br-ex1
NAME=br-ex1
DEVICETYPE=ovs
OVSBOOTPROTO=static
TYPE=OVSBridge

[root@RDO network-scripts]# cat /etc/iproute2/rt_tables 
#
# reserved values
#
255 local
254 main
253 default
0   unspec
#
# local
#
#1  inr.ruhep
200 VLAN43
201 VLAN44

[root@RDO network-scripts]# cat route-br-ex
default via 10.13.43.1 dev br-ex table VLAN43

[root@RDO network-scripts]# cat route-br-ex1
default via 10.13.44.1 dev br-ex1 table VLAN44

[root@RDO network-scripts]# cat rule-br-ex
from 10.13.43.22/24 table VLAN43

[root@RDO network-scripts]# cat rule-br-ex1
from 10.13.44.22/24 table VLAN44

From the RDO node I can ping any IP in VLAN 43 (ping 10.13.43.x) or VLAN 44 (ping 10.13.44.x), but if I ping any other VLAN, such as the DNS server (ping 10.13.10.10), I get "network unreachable". Please help me with this configuration.
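
A note on the likely cause: locally originated traffic consults the main routing table before a source address is chosen, and here both default routes live only in the VLAN43/VLAN44 tables, so the main table has no default route at all. A minimal diagnostic and fix sketch, assuming 10.13.43.1 should act as the node's primary gateway:

    # Show which table/route the kernel picks for an off-subnet destination
    ip route get 10.13.10.10

    # Possible fix: give the main table a default route too
    ip route add default via 10.13.43.1 dev br-ex

To persist this across reboots, a plain "default via 10.13.43.1 dev br-ex" line (without a "table" clause) could be added to route-br-ex.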

Thanks.

2017-01-17 01:33:51 -0500 received badge  Famous Question (source)
2017-01-06 00:40:45 -0500 received badge  Famous Question (source)
2016-11-22 09:59:09 -0500 received badge  Notable Question (source)
2016-11-15 10:06:29 -0500 received badge  Notable Question (source)
2016-11-13 23:04:02 -0500 received badge  Popular Question (source)
2016-11-13 17:47:55 -0500 answered a question can't access instance IP from any IP outside cloud IPs

Hello, thanks a lot for your help.

@Bernd Bausch: I didn't check the security groups yet, but I will work on that now and update you with the status.

Also, I fixed network access on the first compute node (which is also the network node and controller, the all-in-one host) by following http://docs.openstack.org/juno/install-guide/install/apt/content/neutron-compute-node.html (screenshot: https://s21.postimg.org/rh1chtclj/openstack2.png ), but still no luck with the other compute nodes.

@dbaxps: network topology: https://s21.postimg.org/vpitpxz3b/open.png

[root@server1 ~(keystone_admin)]# neutron port-list 
+--------------------------------------+------+-------------------+------------------------------------------------------------------------------------+
| id                                   | name | mac_address       | fixed_ips                                                                          |
+--------------------------------------+------+-------------------+------------------------------------------------------------------------------------+
| 39f5d246-90c1-4eb2-965c-da55a1f4728a |      | fa:16:3e:a5:39:04 | {"subnet_id": "2a968b98-4fce-4669-96b1-d4b73480bc53", "ip_address": "10.13.43.73"} |
| cae2a36b-9d17-4e4c-9d96-ff9cc01cf845 |      | fa:16:3e:dd:73:7f | {"subnet_id": "2a968b98-4fce-4669-96b1-d4b73480bc53", "ip_address": "10.13.43.74"} |
| de6f9c7b-4823-4de1-9bd1-9d33a7c63a59 |      | fa:16:3e:35:b5:22 | {"subnet_id": "2a968b98-4fce-4669-96b1-d4b73480bc53", "ip_address": "10.13.43.70"} |
+--------------------------------------+------+-------------------+------------------------------------------------------------------------------------+
[root@server1 ~(keystone_admin)]# neutron router-list 
+--------------------------------------+---------+------------------------------------------------------------------------+-------------+-------+
| id                                   | name    | external_gateway_info                                                  | distributed | ha    |
+--------------------------------------+---------+------------------------------------------------------------------------+-------------+-------+
| 3790c404-78dd-4414-9126-18c64c626ebc | router1 | {"network_id": "c3daeb74-4337-412f-a0f2-8ae71645d16f", "enable_snat":  | False       | False |
|                                      |         | true, "external_fixed_ips": [{"subnet_id":                             |             |       |
|                                      |         | "2a968b98-4fce-4669-96b1-d4b73480bc53", "ip_address": "10.13.43.70"}]} |             |       |
+--------------------------------------+---------+------------------------------------------------------------------------+-------------+-------+
[root@server1 ~(keystone_admin)]# neutron router-port-list router1 
+--------------------------------------+------+-------------------+------------------------------------------------------------------------------------+
| id                                   | name | mac_address       | fixed_ips                                                                          |
+--------------------------------------+------+-------------------+------------------------------------------------------------------------------------+
| de6f9c7b-4823-4de1-9bd1-9d33a7c63a59 |      | fa:16:3e:35:b5:22 | {"subnet_id": "2a968b98-4fce-4669-96b1-d4b73480bc53", "ip_address": "10.13.43.70"} |
+--------------------------------------+------+-------------------+------------------------------------------------------------------------------------+
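
As a follow-up to the security-group point above, a sketch of the rules that would typically need to exist (the group name "default" and these exact values are assumptions, using the neutron CLI of this release):

    # Allow ICMP (ping) and SSH into instances in the default security group
    neutron security-group-rule-create --direction ingress --protocol icmp default
    neutron security-group-rule-create --direction ingress --protocol tcp \
        --port-range-min 22 --port-range-max 22 default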
2016-11-13 02:29:21 -0500 asked a question can't access instance IP from any IP outside cloud IPs

Hello all,

I have a multi-node Packstack setup. Instances work fine with their IPs: I can access any IP in any VLAN of my network from any instance (tested with SSH), and one instance can reach another instance's IP. The problem is that I can't access an instance's IP from any non-instance IP. Example: compute node 10.13.43.50, CentOS instance 10.13.43.73.

From 10.13.43.73 I can ping or SSH to 10.13.43.50 without problems. From 10.13.43.50 I can't reach any instance IP with any protocol.

I installed OpenStack all-in-one, then used the answer file to add more compute nodes. The firewall and SELinux are disabled on all nodes.

  [root@server1 ~(keystone_admin)]# neutron net-list
  +--------------------------------------+------------------+----------------------------------------------------+
  | id                                   | name             | subnets                                            |
  +--------------------------------------+------------------+----------------------------------------------------+
  | c3daeb74-4337-412f-a0f2-8ae71645d16f | external_network | 2a968b98-4fce-4669-96b1-d4b73480bc53 10.13.43.0/24 |
  +--------------------------------------+------------------+----------------------------------------------------+
  [root@server1 ~(keystone_admin)]# neutron subnet-list
  +--------------------------------------+---------------+---------------+-------------------------------------------------+
  | id                                   | name          | cidr          | allocation_pools                                |
  +--------------------------------------+---------------+---------------+-------------------------------------------------+
  | 2a968b98-4fce-4669-96b1-d4b73480bc53 | public_subnet | 10.13.43.0/24 | {"start": "10.13.43.65", "end": "10.13.43.239"} |
  +--------------------------------------+---------------+---------------+-------------------------------------------------+
  [root@server1 ~(keystone_admin)]# neutron port-list
  +--------------------------------------+------+-------------------+------------------------------------------------------------------------------------+
  | id                                   | name | mac_address       | fixed_ips                                                                          |
  +--------------------------------------+------+-------------------+------------------------------------------------------------------------------------+
  | 39f5d246-90c1-4eb2-965c-da55a1f4728a |      | fa:16:3e:a5:39:04 | {"subnet_id": "2a968b98-4fce-4669-96b1-d4b73480bc53", "ip_address": "10.13.43.73"} |
  | cae2a36b-9d17-4e4c-9d96-ff9cc01cf845 |      | fa:16:3e:dd:73:7f | {"subnet_id": "2a968b98-4fce-4669-96b1-d4b73480bc53", "ip_address": "10.13.43.74"} |
  | de6f9c7b-4823-4de1-9bd1-9d33a7c63a59 |      | fa:16:3e:35:b5:22 | {"subnet_id": "2a968b98-4fce-4669-96b1-d4b73480bc53", "ip_address": "10.13.43.70"} |
  +--------------------------------------+------+-------------------+------------------------------------------------------------------------------------+
  [root@server1 ~(keystone_admin)]# neutron router-list
  +--------------------------------------+---------+---------------------------------------------------------------+-------------+-------+
  | id                                   | name    | external_gateway_info                                         | distributed | ha    |
  +--------------------------------------+---------+---------------------------------------------------------------+-------------+-------+
  | 3790c404-78dd-4414-9126-18c64c626ebc | router1 | {"network_id": "c3daeb74-4337-412f-a0f2-8ae71645d16f",        | False       | False |
  |                                      |         | "enable_snat": true, "external_fixed_ips": [{"subnet_id":     |             |       |
  |                                      |         | "2a968b98-4fce-4669-96b1-d4b73480bc53", "ip_address":         |             |       |
  |                                      |         | "10.13.43.70"}]}                                              |             |       |
  +--------------------------------------+---------+---------------------------------------------------------------+-------------+-------+
  [root@server1 ~(keystone_admin)]# neutron floatingip-list

  [root@server1 ~(keystone_admin)]# ovs-vsctl show
  8b0bf079-d523-464e-b7c6-1a7ae4c46ebd
      Manager "ptcp:6640:127.0.0.1"
          is_connected: true
      Bridge br-tun
          Controller "tcp:127.0.0.1:6633"
              is_connected: true
          fail_mode: secure
          Port br-tun
              Interface br-tun
                  type: internal
          Port "vxlan-0a0d2b35"
              Interface "vxlan-0a0d2b35"
                  type: vxlan
                  options: {df_default="true", in_key=flow, local_ip="10.13.43.50", out_key=flow, remote_ip="10.13.43.53"}
          Port patch-int
              Interface patch-int
                  type: patch
                  options: {peer=patch-tun}
          Port "vxlan-0a0d2b33"
              Interface "vxlan-0a0d2b33"
                  type: vxlan
                  options: {df_default="true", in_key=flow, local_ip="10.13.43.50", out_key=flow, remote_ip="10.13.43.51"}
          Port "vxlan-0a0d2b34"
              Interface "vxlan-0a0d2b34"
                  type: vxlan
                  options: {df_default="true", in_key=flow, local_ip="10.13.43.50", out_key=flow, remote_ip="10.13.43.52"}
          Port "vxlan-0a0d2b36"
              Interface "vxlan-0a0d2b36"
                  type: vxlan
                  options: {df_default="true", in_key=flow, local_ip="10.13.43.50", out_key=flow, remote_ip="10.13.43.54"}
      Bridge br-ex
          Controller "tcp:127.0.0.1:6633"
              is_connected: true
          fail_mode: secure
          Port "eth0"
              Interface "eth0"
          Port br-ex
              Interface br-ex
                  type: internal
          Port phy-br-ex
              Interface phy-br-ex
                  type: patch
                  options: {peer=int-br-ex}
      Bridge br-int
          Controller "tcp:127.0.0.1:6633"
              is_connected: true
          fail_mode: secure
          Port int-br-ex
              Interface int-br-ex
                  type: patch
                  options: {peer=phy-br-ex}
          Port br-int
              Interface br-int
                  type: internal
          Port "qvo39f5d246-90"
              tag: 1
              Interface "qvo39f5d246-90"
          Port patch-tun
              Interface patch-tun
                  type: patch
                  options: {peer=patch-int}
      ovs_version: "2.5.0"
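
One way to narrow down where the path breaks (a sketch, run on the network node; the qrouter ID is router1's ID from the listing above): ping the instance from inside Neutron's router namespace, which bypasses the external path entirely:

    # List the namespaces, then ping the instance from the router namespace
    ip netns
    ip netns exec qrouter-3790c404-78dd-4414-9126-18c64c626ebc ping -c 3 10.13.43.73

If this works but pings from the compute node's own IP do not, the problem sits between the external network and the instances (security groups or provider-network wiring), not inside the tenant network.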
2016-11-10 17:42:29 -0500 received badge  Popular Question (source)
2016-11-10 04:37:15 -0500 asked a question Error: No valid host was found (network br-ex error)

hello all

I tried RDO Newton with an all-in-one setup, then added more compute nodes via the answer file. I use the bridged network br-ex to access the external network. I can create an instance on compute node 1 (the original all-in-one host) with networking over br-ex working without problems, but on all other compute nodes I get this error:

Error: Failed to perform requested operation on instance "CN5", the instance has an error status: Please try again later [Error: No valid host was found. There are not enough hosts available.].

Error log on the compute servers (nova-compute.log):

2016-11-10 13:06:03.024 3077 ERROR nova.compute.manager [req-36a99f5d-030a-4c32-9ef8-80816b93c0c3 4f11ca0729044a638bba62c8f5457854 b1aadb9f484d4014818e30845a93b041 - - -] Instance failed network setup after 1 attempt(s)
2016-11-10 13:06:03.024 3077 ERROR nova.compute.manager Traceback (most recent call last):
2016-11-10 13:06:03.024 3077 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1397, in _allocate_network_async
2016-11-10 13:06:03.024 3077 ERROR nova.compute.manager     bind_host_id=bind_host_id)
2016-11-10 13:06:03.024 3077 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 881, in allocate_for_instance
2016-11-10 13:06:03.024 3077 ERROR nova.compute.manager     bind_host_id, dhcp_opts, available_macs)
2016-11-10 13:06:03.024 3077 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 1000, in _update_ports_for_instance
2016-11-10 13:06:03.024 3077 ERROR nova.compute.manager     vif.destroy()
2016-11-10 13:06:03.024 3077 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
2016-11-10 13:06:03.024 3077 ERROR nova.compute.manager     self.force_reraise()
2016-11-10 13:06:03.024 3077 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
2016-11-10 13:06:03.024 3077 ERROR nova.compute.manager     six.reraise(self.type_, self.value, self.tb)
2016-11-10 13:06:03.024 3077 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 970, in _update_ports_for_instance
2016-11-10 13:06:03.024 3077 ERROR nova.compute.manager     port_client, instance, port_id, port_req_body)
2016-11-10 13:06:03.024 3077 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 437, in _update_port
2016-11-10 13:06:03.024 3077 ERROR nova.compute.manager     _ensure_no_port_binding_failure(port)
2016-11-10 13:06:03.024 3077 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 175, in _ensure_no_port_binding_failure
2016-11-10 13:06:03.024 3077 ERROR nova.compute.manager     raise exception.PortBindingFailed(port_id=port['id'])
2016-11-10 13:06:03.024 3077 ERROR nova.compute.manager PortBindingFailed: Binding failed for port 5747117c-f4a5-4fed-b9c6-10241db4eb28, please check neutron logs for more information.
2016-11-10 13:06:03.024 3077 ERROR nova.compute.manager 
2016-11-10 13:06:03.026 3077 ERROR nova.compute.manager [req-36a99f5d-030a-4c32-9ef8-80816b93c0c3 4f11ca0729044a638bba62c8f5457854 b1aadb9f484d4014818e30845a93b041 - - -] [instance: 8460ab28-a586-4f2e-8f70-fad48215c290] Instance failed to spawn
2016-11-10 13:06:03.026 3077 ERROR nova.compute.manager [instance: 8460ab28-a586-4f2e-8f70-fad48215c290 ...
(more)
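
PortBindingFailed on only the added compute nodes often means the OVS agent there has no mapping for the external physical network. A sketch of what one would verify on each failing node (the physnet name "physnet1", the file path, and eth0 are assumptions; they must match the ML2 configuration and the local NICs):

    # /etc/neutron/plugins/ml2/openvswitch_agent.ini (assumed path), [ovs] section:
    #     bridge_mappings = physnet1:br-ex

    # Create the bridge, attach the physical NIC, restart the agent
    ovs-vsctl --may-exist add-br br-ex
    ovs-vsctl --may-exist add-port br-ex eth0
    systemctl restart neutron-openvswitch-agent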
2016-08-03 08:41:13 -0500 received badge  Famous Question (source)
2016-06-06 03:34:07 -0500 received badge  Notable Question (source)
2016-05-26 15:10:46 -0500 received badge  Popular Question (source)
2016-05-22 06:47:36 -0500 answered a question unable to attach cinder volume - iSCSI requires authentication

Hello all, fixed now.

Just log in to the cinder node:

targetcli

cd iscsi/iqn.2010-10.org.openstack:volume-7eec84a4-b961-4ca2-b9a6-d033684102af/tpg1/

set attribute authentication=0 demo_mode_write_protect=0 generate_node_acls=1 cache_dynamic_acls=1

exit

systemctl restart openstack-cinder-volume.service target.service
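
For reference, the same attribute change can be applied non-interactively (a sketch; the IQN and tpg1 are the ones from the session above). Note that this disables CHAP entirely, which is only reasonable on a trusted storage network:

    targetcli /iscsi/iqn.2010-10.org.openstack:volume-7eec84a4-b961-4ca2-b9a6-d033684102af/tpg1 \
        set attribute authentication=0 demo_mode_write_protect=0 generate_node_acls=1 cache_dynamic_acls=1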

Cinder log:

2016-05-22 07:08:41.476 18098 INFO cinder.rpc [req-21fa2c64-3b68-42db-99e9-7380e58f38c7 - - - - -] Automatically selected cinder-scheduler objects version 1.3 as minimum service version.
2016-05-22 07:08:41.482 18098 INFO cinder.rpc [req-21fa2c64-3b68-42db-99e9-7380e58f38c7 - - - - -] Automatically selected cinder-scheduler RPC version 2.0 as minimum service version.
2016-05-22 07:08:41.544 18098 INFO cinder.volume.manager [req-21fa2c64-3b68-42db-99e9-7380e58f38c7 - - - - -] Determined volume DB was not empty at startup.
2016-05-22 07:08:41.638 18098 INFO cinder.volume.manager [req-21fa2c64-3b68-42db-99e9-7380e58f38c7 - - - - -] Image-volume cache disabled for host computnode01@lvm.
2016-05-22 07:08:41.640 18098 INFO oslo_service.service [req-21fa2c64-3b68-42db-99e9-7380e58f38c7 - - - - -] Starting 1 workers
2016-05-22 07:08:41.646 18112 INFO cinder.service [-] Starting cinder-volume node (version 8.0.0)
2016-05-22 07:08:41.648 18112 INFO cinder.volume.manager [req-afe8d74a-3383-43de-b320-e850d209d0ed - - - - -] Starting volume driver LVMVolumeDriver (3.0.0)
2016-05-22 07:08:42.737 18112 INFO cinder.volume.manager [req-afe8d74a-3383-43de-b320-e850d209d0ed - - - - -] Driver initialization completed successfully.
2016-05-22 07:08:42.779 18112 INFO cinder.volume.manager [req-afe8d74a-3383-43de-b320-e850d209d0ed - - - - -] Initializing RPC dependent components of volume driver LVMVolumeDriver (3.0.0)
2016-05-22 07:08:43.032 18112 INFO cinder.volume.manager [req-afe8d74a-3383-43de-b320-e850d209d0ed - - - - -] Driver post RPC initialization completed successfully.

Client login:

[root@host-10-13-43-83 ~]# iscsiadm -m node --login
Logging in to [iface: default, target: iqn.2010-10.org.openstack:volume-7eec84a4-b961-4ca2-b9a6-d033684102af, portal: 10.13.43.50,3260] (multiple)
Login to [iface: default, target: iqn.2010-10.org.openstack:volume-7eec84a4-b961-4ca2-b9a6-d033684102af, portal: 10.13.43.50,3260] successful.

[root@host-10-13-43-83 ~]# cat /proc/partitions
major minor  #blocks  name

   8        0   41943040 sda
   8        1     512000 sda1
   8        2   20458496 sda2
 253        0   18358272 dm-0
 253        1    2097152 dm-1
   8       16    1048576 sdb

Thanks, all.

2016-05-21 11:11:49 -0500 received badge  Scholar (source)
2016-05-21 11:07:24 -0500 asked a question unable to attach cinder volume - iSCSI requires authentication

Hello all. I get an error when trying to attach a volume to a running instance using iSCSI.

Volume creation log:

2016-05-21 11:50:16.867 3929 INFO cinder.volume.manager [req-dbfbf90c-9b7e-4ba8-8a02-f8ad755c8c24 b7bd56070ce944d88a2213ce040b405e 1600b0d57d514704a5e4d23770f3265c - - -] Created volume successfully.
2016-05-21 11:51:01.544 3929 INFO cinder.volume.targets.lio [req-63d78ec6-ce44-4e70-95c2-f7937a155d09 046946022fef4d55accf1731a3fdd5f7 1600b0d57d514704a5e4d23770f3265c - - -] Creating iscsi_target for volume: volume-7eec84a4-b961-4ca2-b9a6-d033684102af
2016-05-21 11:51:02.676 3929 INFO cinder.volume.manager [req-63d78ec6-ce44-4e70-95c2-f7937a155d09 046946022fef4d55accf1731a3fdd5f7 1600b0d57d514704a5e4d23770f3265c - - -] Initialize volume connection completed successfully.
2016-05-21 11:51:04.336 3929 INFO cinder.volume.manager [req-698a800f-b421-4a42-bc64-b2b703e47c75 046946022fef4d55accf1731a3fdd5f7 1600b0d57d514704a5e4d23770f3265c - - -] Terminate volume connection completed successfully.

From the instance I get this error:

[root@host-10-13-43-83 ~]# iscsiadm -m node --login
Logging in to [iface: default, target: iqn.2010-10.org.openstack:volume-7eec84a4-b961-4ca2-b9a6-d033684102af, portal: 10.13.43.50,3260] (multiple)
iscsiadm: Could not login to [iface: default, target: iqn.2010-10.org.openstack:volume-7eec84a4-b961-4ca2-b9a6-d033684102af, portal: 10.13.43.50,3260].
iscsiadm: initiator reported error (24 - iSCSI login failed due to authorization failure)
iscsiadm: Could not log into all portals

My cinder.conf:

iscsi_helper=lioadm
volume_group=cinder-volumes1
iscsi_ip_address=10.13.43.50
volume_driver=cinder.volume.drivers.lvm.LVMVolumeDriver
volumes_dir=/var/lib/cinder/volumes
volume_backend_name=lvm

OS: CentOS 7

Can I remove this authentication, and how can I attach the volume directly to the instance without any configuration on the instance side?
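
For completeness, an alternative to removing authentication is to log in with the target's CHAP credentials (a sketch; <chap_user> and <chap_pass> are placeholders that, with the LIO helper, can be read via targetcli on the cinder node):

    # Point the initiator at the target's CHAP credentials, then log in
    T=iqn.2010-10.org.openstack:volume-7eec84a4-b961-4ca2-b9a6-d033684102af
    P=10.13.43.50:3260
    iscsiadm -m node -T $T -p $P --op update -n node.session.auth.authmethod -v CHAP
    iscsiadm -m node -T $T -p $P --op update -n node.session.auth.username -v <chap_user>  # placeholder
    iscsiadm -m node -T $T -p $P --op update -n node.session.auth.password -v <chap_pass>  # placeholder
    iscsiadm -m node -T $T -p $P --login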

Thanks,

2016-05-04 19:39:26 -0500 commented answer glance endpoint

Thanks, it's working.

2016-05-04 19:37:58 -0500 received badge  Famous Question (source)
2016-04-30 09:54:33 -0500 received badge  Popular Question (source)
2016-04-30 09:54:33 -0500 received badge  Notable Question (source)
2016-04-27 02:57:18 -0500 received badge  Enthusiast
2016-04-26 11:19:30 -0500 asked a question glance endpoint

Hello, I'm trying to install OpenStack multi-node:

controller (keystone, mariadb, rabbitmq, horizon)
10.13.43.50  -----public ip
10.1.1.50      -----private ip

glance server
10.13.43.51   -----public ip
10.1.1.51       -----private ip

10.13.x.x: accessible network
10.1.x.x:  private network between servers only

Should the endpoints be like this:
openstack endpoint create --region RegionOne image public http://10.13.43.51:9292 
openstack endpoint create --region RegionOne image internal http://10.1.1.51:9292 
openstack endpoint create --region RegionOne image admin http://10.1.1.51:9292

Or can I use only the 10.1.1.51 IP?
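
For what it's worth, the usual convention is exactly the split shown above: the public endpoint on the routable network, internal and admin on the management network. The result can be verified with (a sketch):

    openstack endpoint list --service image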

Thanks,

2016-04-25 19:00:38 -0500 received badge  Supporter (source)