
briancline's profile - activity

2014-08-21 06:31:15 -0500 received badge  Necromancer (source)
2014-08-19 12:48:03 -0500 received badge  Taxonomist
2014-08-19 12:39:46 -0500 edited question Account HEAD failed: 401 Unauthorized

Hi,

We have configured Swift as object storage with Havana. All the services are running, but we are not able to authenticate with Keystone; it returns the error below. We have checked all the configuration but cannot figure out the issue.

Please help us find a solution.

root@Storage:/etc/swift# swift stat
Account HEAD failed: http://Storage.abc.com:8080/v1/AUTH_c... 401 Unauthorized
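For reference, the keystone-related sections of a Havana-era /etc/swift/proxy-server.conf generally look like the sketch below (the hostname and credentials here are placeholders, not our actual values):

[pipeline:main]
pipeline = catch_errors healthcheck cache authtoken keystoneauth proxy-server

[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
auth_host = keystone.abc.com
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = swift
admin_password = SWIFT_PASS

[filter:keystoneauth]
use = egg:swift#keystoneauth
operator_roles = admin, swiftoperator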

2014-08-19 12:32:44 -0500 edited question Swift not replicating deleted file in Devstack

Hi, I installed OpenStack using DevStack with the following localrc file.

####enable swift
enable_service s-proxy s-object s-container s-account
SWIFT_REPLICAS=3
SWIFT_HASH=A1231312312
disable_service n-net
enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service q-metering
enable_service q-lbaas
enable_service tempest

When I upload an object using swift, three copies of the file are created, but when I delete one copy, the object is not replicated back.
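For reference, the check amounts to something like the following (the account, container, and object names are placeholders); as I understand it, the deleted copy should only reappear after the object replicator runs:

# find the three nodes/devices holding the object
swift-get-nodes /etc/swift/object.ring.gz AUTH_test mycontainer myobject

# after removing one on-disk copy, force a replication pass
sudo swift-init object-replicator once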

2014-08-19 12:30:28 -0500 edited answer Openstack-Swift performance problem

Given your use case, this sounds like a simple case of high activity versus low IOPS capacity. Essentially you're asking relatively few drives to do quite a bit of work; it may not seem like a lot, but factor in the writes for all replicas, the account and container database retrieval/maintenance involved in each request, replication runs for account DBs, container DBs, and objects, object-expirer scans, auditor runs, and so on.

The least-hassle solution is probably to add more raw IOPS capacity by adding more drives to the cluster (note that this does not necessarily mean more physical servers, only more drives), and to set up monitoring and alerting that checks these metrics frequently so you can anticipate the problem much sooner in the future.

You might counter that you're nowhere near your total disk capacity and don't need more disks, but raw capacity isn't really the answer here, since you're hitting a ceiling on the IOPS your existing disks provide. More drives, and therefore more IOPS, is going to be your best bet, unless you want to replace these drives with expensive 10K or even 15K RPM drives (although, assuming your activity continues to grow as it always does, there will be a point where you hit the same raw IOPS limitations and have to scale out onto more drives anyway).

EDIT: I missed the tail end of your post the first time I wrote this. I know you mention SSDs are too expensive; however, it sounds like you were ruling them out (rightly so) for storing all of your objects. You might reconsider SSDs if you use them only for account and container database storage (essentially, building your account and container rings from those SSDs rather than from the same disks your object ring uses). Using SSDs this way is a very common and battle-tested way to take load off the spindle-based disks storing your actual objects while realizing major performance gains in account- and container-related operations.

Using SSDs only for account and container databases won't require anywhere near as much space as your objects do, so it may not be as cost-prohibitive as you thought. If you want to know your absolute minimum SSD space requirement for this purpose, you can run du -sh /srv/node/*/{account,container} on each of your storage nodes (be warned, it will churn through the page cache) to see how much space those databases currently use.
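If you do go this route, the account and container rings simply need to be rebuilt against the SSD devices only. A rough sketch of what that looks like, where the part power, zone, IP, port, device name, and weight are placeholders for your environment:

swift-ring-builder account.builder create 18 3 1
swift-ring-builder account.builder add z1-10.0.0.1:6002/ssd0 100
swift-ring-builder account.builder rebalance

swift-ring-builder container.builder create 18 3 1
swift-ring-builder container.builder add z1-10.0.0.1:6001/ssd0 100
swift-ring-builder container.builder rebalance

You'd repeat the add step for each SSD in each zone, then push the resulting account.ring.gz and container.ring.gz out to every node as usual.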

2014-08-19 11:12:13 -0500 edited question openstack swift object put request

Hi all,

According to my understanding of the OpenStack Swift object PUT request flow, the request first goes to the proxy server, and the proxy server then forwards it to the ObjectController defined in swift/proxy/controllers/obj.py. Here is a portion of ObjectController.PUT() from that module:

def PUT(self, req):
    """HTTP PUT request handler."""
    # ... (earlier parts of the method elided) ...

    # do a HEAD request for container sync and checking object versions
    if 'x-timestamp' in req.headers or \
            (object_versions and not
             req.environ.get('swift_versioned_copy')):
        hreq = Request.blank(req.path_info, headers={'X-Newest': 'True'},
                             environ={'REQUEST_METHOD': 'HEAD'})
        hresp = self.GETorHEAD_base(
            hreq, _('Object'), self.app.object_ring, partition,
            hreq.swift_entity_path)
    # ... (rest of the method elided) ...

In these lines, self.GETorHEAD_base() creates an HTTP connection to the object server for a HEAD request, which is handled by ObjectController.HEAD() in swift/obj/server.py; the HEAD request retrieves the object's metadata.

My question is:

At this point in the code we are in the process of writing the object; it has not yet been written to any node, so what will be returned in hresp?

Thanks in advance,
Pragya Jain

2014-03-03 11:42:42 -0500 received badge  Famous Question (source)
2013-09-12 19:31:17 -0500 received badge  Notable Question (source)
2013-09-03 09:05:14 -0500 received badge  Famous Question (source)
2013-08-19 18:31:55 -0500 received badge  Notable Question (source)
2013-07-29 13:53:00 -0500 commented answer How to set up metadata service on a flat network?

The nova-api host needs to route packets to/from which network, exactly?

2013-07-24 16:33:47 -0500 commented question What causes Metadata service to be very slow?

As a clarification, I *am* using cloud-init in my environment.

2013-07-24 16:33:47 -0500 received badge  Commentator
2013-07-24 16:32:01 -0500 commented question What causes Metadata service to be very slow?

I'm seeing this as well, except most of the metadata requests come back in 0.3 secs, whereas 3 or 4 of the requests made during a single boot of a single instance take as long as the above (~8-17 secs). I also noticed that the quicker ones usually only hit once every ~2 seconds, rather than rapid-fire.

2013-07-10 02:45:44 -0500 received badge  Great Answer (source)
2013-06-24 11:31:33 -0500 received badge  Popular Question (source)
2013-06-21 16:02:13 -0500 asked a question Cloud images only bring up one NIC

I've noticed with both Ubuntu cloud images and CirrOS images that, even though I provide multiple vNICs when creating a VM in Nova, they only bring up the first interface.

After booting, I'm able to confirm by checking the /etc/network/interfaces file, where only lo and eth0 are defined. This seems to be hard-coded for some reason.

When I add eth1, I'm able to bring it up, but this seems counterintuitive to the idea of spinning up ready-to-use VMs.

Is there any way whatsoever to get these to bring up each NIC instead of just one?
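For reference, the manual workaround amounts to adding a stanza like the following (Debian/Ubuntu syntax) to /etc/network/interfaces inside the guest and then running ifup eth1; I'd rather the image did this on its own:

auto eth1
iface eth1 inet dhcp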

2013-05-22 05:54:22 -0500 received badge  Good Answer (source)
2013-05-14 15:50:38 -0500 received badge  Organizer (source)
2013-05-13 14:17:58 -0500 received badge  Good Question (source)
2013-05-13 14:17:26 -0500 received badge  Nice Answer (source)
2013-05-13 14:13:45 -0500 received badge  Nice Question (source)
2013-05-13 14:10:22 -0500 received badge  Nice Answer (source)
2013-05-13 14:10:01 -0500 received badge  Self-Learner (source)
2013-05-13 14:09:58 -0500 received badge  Popular Question (source)
2013-05-09 22:25:52 -0500 received badge  Scholar (source)
2013-05-09 22:25:22 -0500 answered a question Changing deletion block size in Cinder

D'oh. Apologies. I didn't realize that the new volume_dd_blocksize option didn't make it into Grizzly and is currently only available in trunk. I'm using Grizzly packages.

Nothing to see here...

2013-05-08 19:05:56 -0500 received badge  Student (source)
2013-05-07 22:13:20 -0500 answered a question Limit bandwidth usage of the replicas synchronisation

If I understand correctly, it sounds like you want to limit the throughput rate of only the replication between storage nodes, and not the data from proxy nodes.

If so, you can use tc with multiple classes, placing proxy nodes in one class and storage nodes in a different class with lower priority and its own throughput limits. Depending on how you construct your tc ruleset, proxy-to-storage traffic can then be prioritized over storage-to-storage traffic (replication, in this case). Alternatively, you could apply the rate/throughput-limiting rules only to storage node IPs, so that traffic to/from proxies isn't limited.

The configuration value you're thinking of is probably the replica count -- when you set up your rings, you specify the number of replicas to create. Each replica has a throughput cost, as you mention; however, you'll want to consider whether reducing the redundancy of the data stored in the cluster is worth the bandwidth saved in persisting it. Personally, I'd recommend keeping the replica count at 3 (or more) and limiting/deprioritizing the throughput between storage nodes separately.
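As a rough illustration of the tc route (the interface name, rates, and storage-node subnet below are examples only, not recommendations), you could carve out a rate-limited class for storage-to-storage traffic:

tc qdisc add dev eth0 root handle 1: htb default 10
tc class add dev eth0 parent 1: classid 1:1 htb rate 1gbit
tc class add dev eth0 parent 1:1 classid 1:10 htb rate 800mbit ceil 1gbit    # default class (proxy traffic)
tc class add dev eth0 parent 1:1 classid 1:20 htb rate 200mbit ceil 300mbit  # storage-to-storage traffic
tc filter add dev eth0 parent 1: protocol ip prio 1 u32 match ip dst 10.0.1.0/24 flowid 1:20

Anything destined for the storage-node subnet lands in the smaller class; everything else, including proxy traffic, uses the default class.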

Hope this helps!

2013-05-07 16:51:30 -0500 asked a question Changing deletion block size in Cinder

I've got Cinder running atop an LVM volume group as described in the install guide. Everything works -- creating, attaching, detaching, deleting, etc.

However, upon deleting a 100GB Cinder volume, I noticed it took quite a long time because the bits on disk are zeroed out to prevent data leaks once an overlapping volume gets created. The first test took roughly 11 minutes for this step to complete.

I'd definitely like to speed this up a bit, if possible. I've tried adjusting the volume_dd_blocksize option in /etc/cinder/cinder.conf from the default 1M to 4M, and restarted all Cinder services. However, creating then deleting a volume still yields the same block size being used when it gets wiped (dd ... bs=1M).

Is there some other place this option needs to go? My configuration currently consists of:

[DEFAULT]
rootwrap_config = /etc/cinder/rootwrap.conf
api_paste_confg = /etc/cinder/api-paste.ini
debug = True
verbose = True
auth_strategy = keystone
state_path = /var/lib/cinder
lock_path = /var/lock/cinder

# MySQL Connection #
sql_connection=mysql://cinder:111p455w0rd@10.1.2.3/cinder

# RabbitMQ #
rabbit_password=111p455w0rd

# iSCSI and Volume Management #
iscsi_helper = tgtadm
iscsi_ip_address = 10.90.18.152
volume_name_template = volume-%s
volume_group = cinder-volumes
volume_dd_blocksize = 4M
2013-05-07 02:19:32 -0500 received badge  Good Answer (source)
2013-05-06 17:25:54 -0500 received badge  Nice Answer (source)
2013-05-06 09:27:35 -0500 received badge  Editor (source)
2013-05-06 09:27:11 -0500 answered a question Is it normal to see $(tenant_id)s when listing keystone_endpoints?

Yes, this is correct behavior. It gets substituted accordingly at run-time when a tenant requests a service/endpoint list.

This formatting makes it easier for services that require additional segments after the tenant ID, as well as for deployers who need to change the URL schemes a bit.
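For example, an endpoint registered with the template is returned to each tenant with its own ID substituted (the host, service ID, and tenant ID below are placeholders):

keystone endpoint-create --region RegionOne --service-id NOVA_SERVICE_ID \
  --publicurl 'http://controller:8774/v2/$(tenant_id)s' \
  --internalurl 'http://controller:8774/v2/$(tenant_id)s' \
  --adminurl 'http://controller:8774/v2/$(tenant_id)s'

A tenant whose ID is 7f3b8a... would then see http://controller:8774/v2/7f3b8a... in its service catalog.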

2013-05-06 09:23:16 -0500 commented question Why are nova-conductor and nova-compute not showing in multi-node setup?

Are there any errors or warnings in the /var/log/nova/nova-conductor.log and nova-compute.log files?

2013-05-03 18:08:09 -0500 received badge  Teacher (source)
2013-05-03 11:05:01 -0500 answered a question Why do I get "No portal found" error while attaching cinder volume to VM?

In addition to @obuisson's answer, here's a good checklist to run through to diagnose problems with attaching Cinder volumes (a condensed command-line version follows the list):

  • Ensure the sysfsutils package is installed, which provides the systool command
  • Ensure the cinder-volumes LVM volume group is active if you're using LVM: vgchange -ay cinder-volumes
  • Determine whether you have both tgt and iscsitarget installed and running. If so, pick one to keep and remove the other with apt-get's --purge flag (e.g., apt-get remove --purge iscsitarget). Then start or restart the one you kept, if it isn't already running.
  • Ensure iscsid is started with service open-iscsi status
  • Verify you get something back from iscsiadm -m discovery
  • If the iscsiadm discovery returned a result, take note of the IP address and verify you don't get any errors from iscsiadm -m discovery -t sendtargets -p 127.0.0.1:3260 (assuming 127.0.0.1:3260 is what you got from iscsiadm's discovery)
  • Check your /etc/cinder/cinder.conf file to ensure:
    • iscsi_helper is set to tgtadm if you're using tgt, or ietadm if you're using iscsitarget
    • iscsi_ip_address is set to the management-network (private) IP where you are running Cinder and tgt/iscsitarget
    • volume_group and volume_name_template are set correctly if using LVM
  • Check your /etc/nova/nova.conf file to ensure:
    • volume_api_class is set to nova.volume.cinder.API
    • iscsi_helper is set the same as it is in cinder.conf
    • iscsi_ip_address is set the same as it is in cinder.conf
    • volume_group and volume_name_template are the same as they are in cinder.conf, if using LVM
  • Restart all Cinder services: for ii in /etc/init.d/cinder-*; do restart $(basename $ii); done
  • Restart all Nova services: for ii in /etc/init.d/nova-*; do restart $(basename $ii); done
  • Test again with a brand new Cinder volume, since some of the info attached to volumes from previous tests may be wrong (it gets set at create time; so if the config is bad when a volume is created, the info in Cinder's DB will be bad for that volume even after you correct the config files).
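Condensed into commands, the checklist looks roughly like this (the IP and port are examples; adjust them for your environment):

dpkg -l sysfsutils tgt open-iscsi                         # confirm required packages (assumes tgt rather than iscsitarget)
vgchange -ay cinder-volumes                               # activate the LVM volume group
service open-iscsi status                                 # iscsid should be running
iscsiadm -m discovery                                     # should list at least one discovery record
iscsiadm -m discovery -t sendtargets -p 127.0.0.1:3260    # should complete without errors
grep -E 'iscsi_helper|iscsi_ip_address|volume_group|volume_name_template' \
    /etc/cinder/cinder.conf /etc/nova/nova.conf           # values should match across both files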

I was receiving precisely the same error on a dev environment deploy yesterday and this seemed to be the most exhaustive set of things I could think of to run through to diagnose the issue. Ultimately I was able to get it working.

As always, be sure to take note of any change in the log messages each time you test a change. Good luck!

2013-05-01 20:37:47 -0500 answered a question Limit bandwidth usage of the replicas synchronisation

As I understand the issue, there's not a lot you can do in a non-crude way here, other than modifying the code. That being said, two of the easier (crude) options available to you are:

1) Use iptables to limit the packets/second rate (see the limit and hashlimit module documentation in iptables). Something like this on your storage nodes may do the trick:

iptables -A OUTPUT -p tcp --dport 6000 -m state --state RELATED,ESTABLISHED -m limit --limit 50/second --limit-burst 100 -j ACCEPT

This matches (and accepts) at most 50 packets/sec on established connections to port 6000, after allowing an initial burst of 100 packets; to actually throttle, follow it with a rule that drops packets to port 6000 exceeding the limit (a plain -j DROP rule with the same match, placed right after it). Note that 50 and 100 here are sample values -- you'll have to determine what packet rate is acceptable between object servers in your environment and what the burst threshold should be, but you get the idea.

2) Use tc (traffic control) to limit the actual throughput rate.

This is a bit more involved, but you'd essentially use iptables with -j MARK and --set-xmark to tag outbound packets on port 6000 with a specific mark, then set up tc rules that tell the kernel the maximum throughput rate for the interface and the maximum rate acceptable for packets carrying that mark. You can optionally get fancier with the tc classes, making them adaptive based on current usage in other classes, and so forth.
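A bare-bones version of that mark-and-shape setup might look like the following (the interface, mark value, and rates are examples, not recommendations):

# mark outbound replication traffic headed to the object-server port
iptables -t mangle -A OUTPUT -p tcp --dport 6000 -j MARK --set-xmark 0x10/0xffffffff

# shape marked traffic into its own capped htb class
tc qdisc add dev eth0 root handle 1: htb default 10
tc class add dev eth0 parent 1: classid 1:1 htb rate 1gbit
tc class add dev eth0 parent 1:1 classid 1:10 htb rate 900mbit ceil 1gbit    # everything else
tc class add dev eth0 parent 1:1 classid 1:20 htb rate 100mbit ceil 200mbit  # marked replication traffic
tc filter add dev eth0 parent 1: protocol ip prio 1 handle 0x10 fw flowid 1:20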

Here are a few good links on tc if you wish to go this route: http://lartc.org/howto/ http://www.cyberciti.biz/faq/linux-tr... http://shearer.org/Linux_Shaping_Temp...

2013-05-01 17:01:18 -0500 commented question VM stops boot right after initializing tun

Can you post a link to the specific cookbook you're using, and the network-related config you're supplying to it?

2013-05-01 16:55:31 -0500 commented question I am setting up swift with 2 VMs

To whoever downvoted, it's not constructive to downvote a valid question. What's obvious to one may not be obvious to another...the whole idea of this site is to share that sort of knowledge and help others. Just add a comment if you need more info.

2013-04-10 10:34:27 -0500 commented question I can't create a network When the network is set to vlan

What release are you running--Essex, Folsom, or Grizzly?

2013-04-10 10:24:33 -0500 commented question Why Glance spend a long time connecting to MySQL?

Are any of your other services having the same problem keeping a MySQL connection open?

2013-04-10 10:23:20 -0500 commented question Why I can not login to the deployed vm after adding compute node

System-level accounts on instances aren't affected by Keystone. Are you using SSH keypairs on your instances, or password-based authentication?

2013-04-10 10:20:07 -0500 commented question Cant't ping my vm from controller node

Also, to make it easier for others to read the configurations you posted, try editing your post, highlighting the config blocks, and clicking the "101 010" button in the editor to prefix each line with four spaces. Once you save, the site will present them as a readable code block.

2013-04-10 10:18:49 -0500 commented question Cant't ping my vm from controller node

Are you able to ping 172.16.1.201 from an instance?