
JD Hallen's profile - activity

2016-04-19 00:37:58 -0600 received badge  Famous Question (source)
2015-09-03 08:03:28 -0600 received badge  Famous Question (source)
2015-07-24 03:22:31 -0600 received badge  Self-Learner (source)
2015-07-24 03:22:28 -0600 received badge  Student (source)
2015-07-20 13:27:13 -0600 answered a question Post-Creation to HOT template

Ok, a couple of things I see here. 1) Your 'str_replace' line needs to be indented, since it's the "value" part of 'user_data'. 2) Your 'template:' line and everything below it also need to be indented. It should look like this:

user_data:
    str_replace: 
        template: |
            # code goes here...

3) The lines underneath the 'template: |' line are meant to be executed as a shell script... at least, that is what I have seen in examples on the web. What you have there is YAML, so I would guess that's going to cause errors.
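
For example, here is a minimal sketch of a working block with the template body written as a shell script; the parameter name and script contents here are made up purely for illustration:

user_data:
    str_replace:
        template: |
            #!/bin/bash
            # hypothetical script: write a substituted value out to a log
            echo "Hello from $server_name" >> /var/log/greeting.log
        params:
            $server_name: { get_param: name }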

When you get errors like this, be sure to check your heat-api.log file, as the error could be coming from the heat client sending the stack-create command plus parameters in via the API, with the failure happening on that side. I get this error for one of my templates too, and checking the heat-api.log file shows that it's actually a "'str_replace' parameters must be a mapping" error (I have no 'params', and I suspect that is what's causing the issue). Try running heat stack-create with the '--debug' option to at least get better debug output to work from.
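
Something along these lines (the stack and template file names are just placeholders):

    # heat --debug stack-create -f mytemplate.yaml mystack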

2015-07-16 07:35:44 -0600 received badge  Scholar (source)
2015-07-15 03:08:53 -0600 received badge  Nice Answer (source)
2015-07-14 23:02:04 -0600 received badge  Self-Learner (source)
2015-07-14 23:02:04 -0600 received badge  Teacher (source)
2015-07-14 16:21:02 -0600 answered a question HTTPS not working to Instance using FloatingIP

Found the fix myself: the MTU was set too high and was causing packets to be dropped. The node was trying to exchange SSL keys, but since the packets were about 50 bytes over the MTU limit, they were getting dropped at the other end of the GRE tunnel. Once I set the MTU for the network interface down to 1450 on the instances, everything worked correctly!
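
For reference, the change amounts to a one-liner along these lines, run inside each instance (the interface name will vary):

    # ip link set dev eth0 mtu 1450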

Cheers, JD

2015-07-14 16:19:00 -0600 received badge  Notable Question (source)
2015-07-14 16:19:00 -0600 received badge  Popular Question (source)
2015-07-14 16:17:28 -0600 answered a question Neutron Network node not Routing

The problem somehow corrected itself and everything routes correctly now... which is odd, since I didn't change anything. Maybe someone hacked into my cloud and fixed it so they can correctly do their nefarious deeds ;) JD

2015-07-14 16:16:13 -0600 commented question Neutron Network node not Routing

Hi dbaxps... Suddenly everything is working correctly and routing is no longer an issue... I guess the magic OpenStack Networking fairy came down and fixed it for me. Still looking at the logs to see if something jumps out as the root cause. Thanks for your help! JD

2015-07-14 16:13:38 -0600 answered a question Connection to instance dies on 'large' packets in Juno

Ok, I figured out the issue and am posting it here in case someone else runs into the same thing.

I always figured it had something to do with the networking, since I could always get connections, just not 'large' outputs.

My OpenStack Juno cloud has the "default" three nodes: Network, Compute, and Controller. I went with the default GRE tunnel between the Network and Compute nodes. GRE tunneling eats up a little under 50 bytes of overhead out of the MTU. When I looked at my instances' interface settings, I noticed the MTU was still set to 1500! For some reason the end-to-end MTU was not being adjusted the way I thought it would be. Once I brought it down to 1450, everything worked like a champ. I thought Linux would auto-adjust the MTU, but in my cloud it did not. All my instances have been adjusted and everything is working correctly now.
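
A follow-up note in case it saves someone the manual step: instead of setting the MTU by hand inside every instance, the DHCP agent can push it to instances via DHCP option 26 (interface MTU). A sketch, assuming the default Juno file locations:

    # /etc/neutron/dhcp_agent.ini (network node)
    dnsmasq_config_file = /etc/neutron/dnsmasq-neutron.conf

    # /etc/neutron/dnsmasq-neutron.conf
    dhcp-option-force=26,1450

Then restart the neutron-dhcp-agent service and renew the DHCP leases inside the instances.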

JD

2015-07-12 14:40:59 -0600 received badge  Famous Question (source)
2015-07-12 14:40:59 -0600 received badge  Notable Question (source)
2015-07-12 14:40:54 -0600 received badge  Notable Question (source)
2015-07-10 13:02:16 -0600 received badge  Popular Question (source)
2015-07-10 05:37:44 -0600 received badge  Popular Question (source)
2015-07-09 12:59:32 -0600 commented question Neutron Network node not Routing

dbaxps: which one?

# ip netns
qdhcp-a29141d5-585d-463f-932b-9860f90d6b14
qdhcp-e5e9dcd0-a93e-4f18-8217-0bc89a650a39
qdhcp-71932895-a158-4ca4-89ca-03305f918f14
qrouter-c939264f-e3ee-45d5-a885-11ad94e04c12
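
In case it helps, the router namespace from that list can be inspected like this (the ID is copied from the output above):

# ip netns exec qrouter-c939264f-e3ee-45d5-a885-11ad94e04c12 ip route
# ip netns exec qrouter-c939264f-e3ee-45d5-a885-11ad94e04c12 iptables -t nat -L -n
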
2015-07-09 09:37:54 -0600 asked a question Neutron Network node not Routing

Pulling my hair out on this one... I can't get my Neutron network node to do a simple route of a packet to my controller node! I'm sure it's something stupid, but I guess I just can't see it. /proc/sys/net/ipv4/ip_forward contains '1', and /etc/sysctl.conf looks like this:

net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0
net.ipv4.ip_forward=1
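
(In case the file was edited after boot: 'sysctl -p' re-applies it, and the live values can be double-checked like this:)

# sysctl -p
# sysctl net.ipv4.ip_forward net.ipv4.conf.all.rp_filter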

I can get to all my instances on my compute nodes, so I know the bridging/tunneling is working. All three networks show up in my 'ip a' output: External on br-ex (with eth1 attached to br-ex), Tunnel on eth2, and Management on eth0. I can ping the Compute and Controller nodes just fine from the Network node, but outside connections coming in on the External network stop at the Network node.

What am I doing wrong?? Thanks for your time! JD

UPDATE1:

# ovs-vsctl show
3ec0fc70-90ce-4e89-818f-5fdab99bf08a
    Bridge br-int
        fail_mode: secure
        Port br-int
            Interface br-int
                type: internal
        Port "tap221f3067-b6"
            tag: 3
            Interface "tap221f3067-b6"
                type: internal
        Port int-br-ex
            Interface int-br-ex
                type: patch
                options: {peer=phy-br-ex}
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port "tap114ed438-14"
            tag: 1
            Interface "tap114ed438-14"
                type: internal
        Port "tapac8c15d2-cc"
            tag: 2
            Interface "tapac8c15d2-cc"
                type: internal
        Port "qr-dec6f704-48"
            tag: 3
            Interface "qr-dec6f704-48"
                type: internal
    Bridge br-ex
        Port phy-br-ex
            Interface phy-br-ex
                type: patch
                options: {peer=int-br-ex}
        Port "qg-710689ae-67"
            Interface "qg-710689ae-67"
                type: internal
        Port br-ex
            Interface br-ex
                type: internal
        Port "eth1"
            Interface "eth1"
    Bridge br-tun
        fail_mode: secure
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port "gre-0a000119"
            Interface "gre-0a000119"
                type: gre
                options: {df_default="true", in_key=flow, local_ip="10.0.1.28", out_key=flow, remote_ip="10.0.1.25"}
        Port br-tun
            Interface br-tun
                type: internal
    ovs_version: "2.0.2"


# ifconfig
br-ex     Link encap:Ethernet  HWaddr 00:1e:90:13:9a:f6
          inet addr:10.147.29.28  Bcast:10.147.29.255  Mask:255.255.255.0
          inet6 addr: fe80::b803:34ff:fe88:d5c3/64 Scope:Link
          UP BROADCAST RUNNING  MTU:1500  Metric:1
          RX packets:10273 errors:0 dropped:0 overruns:0 frame:0
          TX packets:5682 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1114956 (1.1 MB)  TX bytes:1168138 (1.1 MB)

br-int    Link encap:Ethernet  HWaddr c2:81:7e:1a:7f:4f
          inet6 addr: fe80::8401:81ff:fe90:8376/64 Scope:Link
          UP BROADCAST RUNNING  MTU:1500  Metric:1
          RX packets:68 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:5148 (5.1 KB)  TX bytes:648 (648.0 B)

br-tun    Link encap:Ethernet  HWaddr 0e:44:0c:98:9f:49
          inet6 addr: fe80::dc18:d2ff:fec6:2d2c/64 Scope:Link
          UP BROADCAST RUNNING  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX ...
2015-07-09 09:30:04 -0600 received badge  Editor (source)
2015-07-09 09:26:39 -0600 asked a question Connection to instance dies on 'large' packets in Juno

Juno installed according to the manual instructions with Neutron networking. Instances come up fine with no errors, get all their DHCP-assigned IP addresses, and get their Floating IPs assigned, all with no errors. I can connect to any instance via SSH, and as long as the output from the instance takes no more than one or two packets, the connection is fine. If I try something like 'ls -al /var/log', it hangs and the session times out. I can reconnect just fine, and there are no errors in either the instance logs or the Neutron logs that I can see. I've tried this on four different instances (CentOS 7, Ubuntu 14.04.2, Ubuntu 15.04, and F5 BIG-IP v11.6.0) and they all behave the same way. I also tried it on the default CirrOS image, but couldn't reproduce the problem there... not sure whether its output is long enough to trigger the issue or not.
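
For reference, a do-not-fragment ping sweep like this can pin down an MTU problem (the floating IP here is a placeholder):

    # ping -M do -s 1422 203.0.113.10
    # ping -M do -s 1472 203.0.113.10

The first (1422 bytes of payload + 28 bytes of headers = 1450) should get through a GRE-tunneled network; the second (1472 + 28 = 1500) will fail if the path MTU is smaller than 1500.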

Not sure, but this might tie into my other current networking issue, where I can't get any HTTPS connection to one of my instances. (It's question # 69288 -- I would just post the link, but since I have no 'karma' on this site, I'm not allowed.)

Thanks for your help! JD

2015-07-09 09:12:28 -0600 received badge  Enthusiast
2015-07-02 12:32:27 -0600 asked a question HTTPS not working to Instance using FloatingIP

I actually have two networking problems; here is the more pressing one first: from an outside node, or the Controller node for that matter, I cannot access any HTTPS ports on my instances.

Configuration: 3-node (Controller, Network, Compute) Ubuntu Juno OpenStack, installed using the default manual install with Neutron networking as documented on the docs.openstack.org website. No errors in the logs, and a CirrOS instance launches with full SSH access. A CentOS 7 instance launches with SSH working until I try a "large" output (an 'ls -al' of a big directory hangs about 20 lines in; i.e. the second network problem). An F5 Networks BIG-IP VE instance launches with SSH working until I try a "large" output. Both the CentOS 7 and the BIG-IP instances keep running, and I have full console access at all times. No errors are reported in either instance's log files. All nodes get all their DHCP-assigned IPs. The 'default' security group is set up like so:

    # nova secgroup-list-rules default
    +-------------+-----------+---------+-----------+--------------+
    | IP Protocol | From Port | To Port | IP Range  | Source Group |
    +-------------+-----------+---------+-----------+--------------+
    |             |           |         |           | default      | 
    | tcp         | 22        | 22      | 0.0.0.0/0 |              |
    | tcp         | 443       | 443     | 0.0.0.0/0 |              |
    |             |           |         |           | default      | 
    | icmp        | -1        | -1      | 0.0.0.0/0 |              |
    | tcp         | 80        | 80      | 0.0.0.0/0 |              |
    +-------------+-----------+---------+-----------+--------------+

When I do a tcpdump from the BIG-IP node, I can see the HTTPS packets coming in and a response going out. But if I create a mirror port on the 'br-int' bridge on the compute node, I see only the responses going out to my client, and no requests coming in?!? That doesn't seem right!

11:25:38.966302 IP6 fe80::bc29:bff:fe04:e7d5 > ip6-allrouters: ICMP6, router solicitation, length 16
11:25:39.231659 IP 10.10.10.10.ssh > 10.147.95.128.57127: Flags [P.], seq 2501911056:2501911092, ack 4236512516, win 241, options [nop,nop,TS val 81901769 ecr 275729760], length 36
11:25:42.974288 IP6 fe80::bc29:bff:fe04:e7d5 > ip6-allrouters: ICMP6, router solicitation, length 16
11:25:49.236382 IP 10.10.10.10.ssh > 10.147.95.128.57127: Flags [P.], seq 36:72, ack 53, win 241, options [nop,nop,TS val 81911774 ecr 275739723], length 36
11:25:51.810641 IP 10.10.10.8.https > 10.147.95.128.63978: Flags [S.], seq 1016497018, ack 1398343447, win 14480, options [mss 1460,sackOK,TS val 242030834 ecr 275742177,nop,wscale 7], length 0
11:25:51.813968 IP 10.10.10.8.https > 10.147.95.128.63978: Flags [.], ack 211, win 122, options [nop,nop,TS val 242030838 ecr 275742190], length 0
11:25:51.832698 IP 10.10.10.8.https > 10.147.95.128.63978: Flags [.], seq 1:1449, ack 211, win 122, options [nop,nop,TS val 242030857 ecr 275742190], length 1448
11:25:51.832743 IP 10.10.10.8.https > 10.147.95.128.63978: Flags [P.], seq 1449:1654, ack 211, win 122, options [nop,nop,TS val 242030857 ecr 275742190], length 205
11:25:52 ...
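
For anyone wanting to reproduce the mirror setup: a mirror on br-int can be created with ovs-vsctl roughly like this, where 'tap-mirror0' is a placeholder for whatever capture port is attached to the bridge:

    # ovs-vsctl -- --id=@out get Port tap-mirror0 \
                -- --id=@m create Mirror name=dbg select-all=true output-port=@out \
                -- set Bridge br-int mirrors=@m

('ovs-vsctl clear Bridge br-int mirrors' removes it again afterwards.)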