
gucluakkaya's profile - activity

2017-04-08 04:36:33 -0500 received badge  Notable Question (source)
2017-04-08 04:36:33 -0500 received badge  Famous Question (source)
2017-04-08 04:36:33 -0500 received badge  Popular Question (source)
2016-08-16 23:20:37 -0500 received badge  Notable Question (source)
2016-08-16 23:20:37 -0500 received badge  Famous Question (source)
2016-02-13 20:23:53 -0500 received badge  Notable Question (source)
2016-02-13 20:23:53 -0500 received badge  Popular Question (source)
2015-02-22 00:38:49 -0500 received badge  Popular Question (source)
2012-11-16 08:28:32 -0500 answered a question Failover principle of Swift

Thank you for your answer. It turns out our application had a problem inserting and retrieving containers. You are right that with 50% of the cluster offline some objects cannot be retrieved; after more tests I verified that failover works properly: if one node is down, Swift will look for another node.

Sorry for the inconvenience.

2012-11-15 17:09:00 -0500 answered a question Failover principle of Swift

Sorry for the previous comment; I accidentally pressed the "solved" button. Here are my account, container, and object rings.

         id  zone  ip address  port  name  weight  partitions  balance  meta
         0     1   ip1         6002  sdb1  100.00  131072      0.00
         1     2   ip2         6002  sdb1  100.00  131072      0.00
         2     3   ip3         6002  sdb1  100.00  131072      0.00
         3     4   ip4         6002  sdb1  100.00  131072      0.00
         4     5   ip5         6002  sdb1  100.00  131072      0.00
         5     6   ip6         6002  sdb1  100.00  131072      0.00

         0 1 ip1 6001 sdb1 100.00 131072 0.00
         1 2 ip2 6001 sdb1 100.00 131072 0.00
         2 3 ip3 6001 sdb1 100.00 131072 0.00
         3 4 ip4 6001 sdb1 100.00 131072 0.00
         4 5 ip5 6001 sdb1 100.00 131072 0.00
         5 6 ip6 6001 sdb1 100.00 131072 0.00

         0 1 ip1  6000 sdb1 100.00 131072 0.00
         1 2 ip2 6000 sdb1 100.00 131072 0.00
         2 3 ip3 6000 sdb1 100.00 131072 0.00
         3 4 ip4 6000 sdb1 100.00 131072 0.00
         4 5 ip5 6000 sdb1 100.00 131072 0.00
         5 6 ip6 6000 sdb1 100.00 131072 0.00
2012-11-15 17:04:34 -0500 answered a question Failover principle of Swift

         id  zone  ip address  port  name  weight  partitions  balance  meta
         0     1   10.0.0.208  6002  sdb1  100.00  131072      0.00
         1     2   10.0.0.207  6002  sdb1  100.00  131072      0.00
         2     3   10.0.0.206  6002  sdb1  100.00  131072      0.00
         3     4   10.0.0.205  6002  sdb1  100.00  131072      0.00
         4     5   10.0.0.204  6002  sdb1  100.00  131072      0.00
         5     6   10.0.0.132  6002  sdb1  100.00  131072      0.00

         0     1   10.0.0.208  6001  sdb1  100.00  131072      0.00
         1     2   10.0.0.207  6001  sdb1  100.00  131072      0.00
         2     3   10.0.0.206  6001  sdb1  100.00  131072      0.00
         3     4   10.0.0.205  6001  sdb1  100.00  131072      0.00
         4     5   10.0.0.204  6001  sdb1  100.00  131072      0.00
         5     6   10.0.0.132  6001  sdb1  100.00  131072      0.00

         0     1      10.0.0.208  6000      sdb1 100.00     131072    0.00 
         1     2      10.0.0.207  6000      sdb1 100.00     131072    0.00 
         2     3      10.0.0.206  6000      sdb1 100.00     131072    0.00 
         3     4      10.0.0.205  6000      sdb1 100.00     131072    0.00 
         4     5      10.0.0.204  6000      sdb1 100.00     131072    0.00 
         5     6      10.0.0.132  6000      sdb1 100.00     131072    0.00
2012-11-15 16:58:52 -0500 asked a question Failover principle of Swift

Hi,

During my tests I came across the following behaviour:

My environment consists of 1 proxy node and 6 storage nodes (account/container/object). I shut down three of the nodes and tried some GETs on certain objects. I discovered that the Swift proxy kept trying to reach the nodes that were down and so received connection timeouts. I guess I did not configure my rings properly for failover. What am I missing in my configuration? You can see my configuration below:

Ring configuration: partition power 18, replica count 3.
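
For reference, the rings were built roughly along these lines; a minimal sketch of the swift-ring-builder commands assumed here (zone/port/device layout as in the ring output posted in this thread, weights illustrative):

    # create builders: partition power 18, 3 replicas, min_part_hours 1
    swift-ring-builder account.builder create 18 3 1
    swift-ring-builder container.builder create 18 3 1
    swift-ring-builder object.builder create 18 3 1

    # add one sdb1 device per storage node, each node in its own zone
    swift-ring-builder account.builder add z1-10.0.0.208:6002/sdb1 100
    swift-ring-builder container.builder add z1-10.0.0.208:6001/sdb1 100
    swift-ring-builder object.builder add z1-10.0.0.208:6000/sdb1 100
    # ... repeated for the remaining five nodes (zones 2-6) ...

    # assign partitions and write the .ring.gz files
    swift-ring-builder account.builder rebalance
    swift-ring-builder container.builder rebalance
    swift-ring-builder object.builder rebalance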

account-server.conf

[DEFAULT]
bind_ip = 0.0.0.0
workers = 8

[pipeline:main]
pipeline = account-server

[app:account-server]
use = egg:swift#account

[account-replicator]
run_pause = 900

[account-auditor]

[account-reaper]

container-server.conf

[DEFAULT]
bind_ip = 0.0.0.0
workers = 8

[pipeline:main]
pipeline = container-server

[app:container-server]
use = egg:swift#container

[container-replicator]
run_pause = 900

[container-updater]

[container-auditor]

object-server.conf

[DEFAULT]
bind_ip = 0.0.0.0
workers = 8

[pipeline:main]
pipeline = object-server

[app:object-server]
use = egg:swift#object

[object-replicator]
run_pause = 900
ring_check_interval = 900

[object-updater]

[object-auditor]
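
For reference, the proxy-side settings that, as far as I understand, control how quickly the proxy gives up on an unreachable storage node live in proxy-server.conf; I have left them at their defaults. A sketch with illustrative values:

    [app:proxy-server]
    use = egg:swift#proxy
    # seconds to wait when opening a connection to a storage node
    conn_timeout = 0.5
    # seconds to wait for a response from a storage node
    node_timeout = 10
    # after this many errors within error_suppression_interval seconds,
    # the proxy temporarily stops sending requests to that node
    error_suppression_limit = 10
    error_suppression_interval = 60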

2012-11-14 17:55:28 -0500 answered a question Openstack as CDN

Thanks David Goetz, that solved my question.

2012-11-14 17:55:07 -0500 answered a question Openstack as CDN

Thank you for your answer, I will look into it.

2012-11-14 13:13:30 -0500 asked a question Openstack as CDN

Hi all,

In the OpenStack documentation I saw the following line:

"Another use for object storage solutions is as a content delivery network (CDN) for hosting static web content (e.g., images, and media files), since object storage already provides an HTTP interface."

I know similar questions were asked two years ago: https://answers.launchpad.net/swift/+question/121949 and https://answers.launchpad.net/swift/+question/136903

Is that still the case, or is there a configuration to integrate OpenStack with an existing CDN provider?
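
As a related note, one way Swift itself can serve static content directly (short of integrating with an external CDN) is the staticweb middleware; a hedged sketch of enabling it in proxy-server.conf (the auth middleware in the pipeline will differ per deployment):

    [pipeline:main]
    # staticweb sits after the auth middleware and before proxy-server
    pipeline = healthcheck cache tempauth staticweb proxy-server

    [filter:staticweb]
    use = egg:swift#staticweb

    # a container is then published by making it world-readable and
    # giving it an index page, e.g.:
    #   X-Container-Read: .r:*
    #   X-Container-Meta-Web-Index: index.html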

Thanks

2012-11-02 06:58:34 -0500 answered a question Preferred Swift partition size and count per Storage Node

So this value (3 * 43690) is much more than 100. From this I understand that the replication process will take much longer, and this may affect performance. To overcome this I would need to increase my disk count. Which is the better practice: increasing the disk count without adding more storage nodes, or scaling out the whole cluster with storage nodes that have one disk each? I should also add that our storage nodes are virtual machines, meaning there is actually one physical disk that is partitioned virtually and mounted to each VM. Is the Swift partition calculation still valid for virtualized disks?
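
To make the arithmetic explicit (assuming partition power 18, 3 replicas, and the 6 disks currently in the rings):

    # partition copies per disk = replicas * 2^part_power / disk_count
    echo $(( 3 * 2**18 / 6 ))   # 131072, matching the "partitions" column in the ring output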

2012-11-01 06:46:52 -0500 answered a question Preferred Swift partition size and count per Storage Node

Thanks for your answer. Just to confirm: my cluster currently has a total of 6 disks. Since I gave a partition power of 18, there are 2^18 = 262144 partitions in total. From your explanation, does that mean the Swift partition count per disk is 262144 / 6 (about 43690), or did I misunderstand your explanation of Swift partitions?

2012-10-23 06:19:33 -0500 answered a question about vm_test_mode parameter in account and object replicators

Thanks Samuel Merritt, that solved my question.

2012-10-22 06:14:12 -0500 asked a question about vm_test_mode parameter in account and object replicators

Hi all,

It will be a short question. What is the impact on a storage node in a VM environment if I leave vm_test_mode at its default value of 'no'? Does it affect the performance of the storage replication process, or is it just a flag for other purposes such as logging or statistics?
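
For context, this is where the flag is set; a minimal sketch for the object server (the account and container replicator sections take the same option):

    [object-replicator]
    # default value; the SAIO / VM test instructions set this to yes
    vm_test_mode = no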

Thanks

2012-10-22 06:04:02 -0500 asked a question Preferred Swift partition size and count per Storage Node

Hi all,

We plan to build an environment with OpenStack Swift as our storage. We need to plan our deployment model and allocate resources for it. Currently we follow the sample from the documentation and deploy 1 proxy node and 5 storage nodes.

While building the rings we use 18 for the partition power and 3 for the replica count. Every storage node has a disk allocated for Swift storage, with one primary partition of 100 GB.

Since I do not know exactly how replication works, I cannot foresee what impact the partition count, partition size, and replica count have on the storage nodes.

What can you recommend for partition size and replica count if I intend to work with 5 storage nodes, one disk each, in a read- and write-intensive environment?
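
For reference, the sizing check I am using (based on the roughly-100-partitions-per-disk guideline discussed in this thread), applied to 5 nodes with one disk each, partition power 18, and 3 replicas:

    # partition copies per disk with the planned layout
    echo $(( 3 * 2**18 / 5 ))   # 157286 -- far above 100, so this partition power
                                # leaves a lot of headroom for adding disks later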

2012-10-18 10:53:11 -0500 answered a question Question about Swift Storage Nodes CPU usage

Update:

After increasing run_pause from 30 to 900 in the account-replicator, container-replicator, and object-replicator sections of the configuration files, CPU usage has decreased considerably. However, increasing run_pause means the replication process pauses for the given time between runs. What are the drawbacks of increasing this value with respect to reliability, and what value would you recommend for run_pause?

Updated configuration:

account-server.conf

[DEFAULT]
bind_ip = 0.0.0.0
workers = 8

[pipeline:main]
pipeline = account-server

[app:account-server]
use = egg:swift#account

[account-replicator]
run_pause = 900

[account-auditor]

[account-reaper]

container-server.conf

[DEFAULT]
bind_ip = 0.0.0.0
workers = 8

[pipeline:main]
pipeline = container-server

[app:container-server]
use = egg:swift#container

[container-replicator]
run_pause = 900

[container-updater]

[container-auditor]

object-server.conf

[DEFAULT]
bind_ip = 0.0.0.0
workers = 8

[pipeline:main]
pipeline = object-server

[app:object-server]
use = egg:swift#object

[object-replicator]
run_pause = 1500
ring_check_interval = 900

[object-updater]

[object-auditor]

2012-10-02 13:32:46 -0500 asked a question Question about Swift Storage Nodes CPU usage

Hi all,

For 3 months we have been using OpenStack Swift in our test environment, and while monitoring CPU usage and I/O traffic on the storage nodes we realized that even at night, when no one makes any requests, CPU usage is around 40 to 50% on the servers and I/O throughput is higher than expected.

Test environment:

1 node (swift-proxy + keystone)
5 storage nodes (account, container and object servers)

Server specifications:

Virtual machine: VMware
OS: Ubuntu 10.04 64-bit LTS
CPU: 4 cores
RAM: 6 GB

On the storage nodes, the auditor, replicator, updater, and server processes are running for each module (account, container, and object).

Configuration files:

account-server.conf

[DEFAULT]
bind_ip = 10.1.1.152
workers = 2
log_facility = LOG_LOCAL1

[pipeline:main]
pipeline = account-server

[app:account-server]
use = egg:swift#account

[account-replicator]

[account-auditor]

[account-reaper]

container-server.conf

[DEFAULT]
bind_ip = 10.1.1.152
workers = 2
log_facility = LOG_LOCAL2

[pipeline:main]
pipeline = container-server

[app:container-server]
use = egg:swift#container

[container-replicator]

[container-updater]

[container-auditor]

object-server.conf

[DEFAULT]
bind_ip = 10.1.1.152
workers = 2
log_facility = LOG_LOCAL3

[pipeline:main]
pipeline = object-server

[app:object-server]
use = egg:swift#object

[object-replicator]

[object-updater]

[object-auditor]

rsyncd.conf

uid = swift

gid = swift

log file = /var/log/rsyncd.log

pid file = /var/run/rsyncd.pid

address = 10.1.1.152

[account]

max connections = 2

path = /srv/node/

read only = false

lock file = /var/lock/account.lock

[container]

max connections = 2

path = /srv/node/

read only = false

lock file = /var/lock/container.lock

[object]

max connections = 2

path = /srv/node/

read only = false

lock file = /var/lock/object.lock

Since we have not run a performance test yet, I think this amount of resource usage is too high and will cause problems during our tests. From the logs I can only see that the replicators run constantly for short periods of time. What can you recommend for improving our environment?

Thanks

2012-05-23 13:23:40 -0500 answered a question Object level ACL for Swift

Thanks Chmouel Boudjnah, that solved my question.

2012-05-23 06:13:58 -0500 asked a question Object level ACL for Swift

Hi,

Currently Swift supports container-level ACLs by filling the X-Container-Read and X-Container-Write metadata with defined roles. Does Swift support object-level ACLs, where I can define a specific ACL for only one object in a container instead of defining it for the whole container?
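
For contrast, the container-level granularity I am referring to looks like this in practice; a sketch using curl, where the URL, token, and ACL value are placeholders:

    # grant read access on the whole container (the value may be a referrer rule,
    # an account:user pair, or a Keystone role name, depending on the auth system)
    curl -X POST -H "X-Auth-Token: $TOKEN" \
         -H "X-Container-Read: member" \
         http://proxy.example.com:8080/v1/AUTH_myaccount/mycontainer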

2012-04-13 07:39:08 -0500 answered a question Keystone Role documentation

Thanks Joseph Heck, that solved my question.

2012-03-21 19:53:31 -0500 answered a question Load Test to OpenStack Swift

Thanks Chuck Thier, that solved my question.

2012-03-21 06:46:36 -0500 answered a question Load Test to OpenStack Swift

Hi,

Thank you for the answer. swift-bench is useful for a single-user test, but we also want to run a multi-user load test with multiple users trying to access OpenStack objects simultaneously. Can swift-bench provide such functionality, or do you have any recommendation for multi-user simulation?
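
For what it is worth, swift-bench does take a concurrency setting, although that simulates parallel connections under one credential rather than genuinely distinct users. A sketch of a bench configuration (option names as I understand them for this release, values illustrative):

    [bench]
    auth = http://proxy.example.com:8080/auth/v1.0
    user = test:tester
    key = testing
    # number of simultaneous client connections
    concurrency = 10
    object_size = 4096
    num_objects = 1000
    num_gets = 10000
    delete = yes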

2012-03-16 06:36:11 -0500 asked a question Load Test to OpenStack Swift

Hi ,

I want to run a load test against a Swift cluster with 5 storage nodes and 1 proxy node. I have been trying to write Tsung scripts for that, but since I am new to Tsung I could not manage to simulate HTTP PUT requests. Do you have any samples (they do not have to be Tsung) for load testing OpenStack Swift?

2012-02-14 14:36:15 -0500 asked a question Keystone Role documentation

Hi,

Where can I find any documentation regarding Keystone > Roles v2.0 APIs?

Thanks,

Maty.

2012-02-14 08:13:45 -0500 asked a question Determining subtrees for Keystone LDAP integration

Hi,

I am trying to use our existing user database with Keystone. Since the schemas are not the same, I tried to leverage LDAP by mapping a predefined Keystone schema onto the existing database schema using back-sql. So far I have succeeded in integrating Keystone with an OpenLDAP server (version 2.4.23) that uses a MySQL database as its backend. However, while investigating the code for the Keystone LDAP integration, I realized that on the LDAP side two subtrees, ou=Groups,dc=example,dc=com and ou=User,dc=example,dc=com, must be defined. But I want Keystone to look for subtrees under a domain that I define myself. I know this is a configuration issue in the LDAP backend section of keystone.conf. Can you show me a sample configuration that uses user-defined LDAP DNs?
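
To frame the question, the kind of keystone.conf settings I mean look roughly like this; a sketch only, since I am not sure of the exact option names in this release, and the DNs are examples of the custom subtrees I would like to use:

    [ldap]
    url = ldap://localhost
    user = cn=admin,dc=mycompany,dc=local
    password = secret
    suffix = dc=mycompany,dc=local
    # point Keystone at subtrees under my own domain instead of the defaults
    user_tree_dn = ou=People,dc=mycompany,dc=local
    tenant_tree_dn = ou=Projects,dc=mycompany,dc=local
    role_tree_dn = ou=Roles,dc=mycompany,dc=local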

Thanks

2012-02-07 09:27:31 -0500 answered a question Swift and Keystone Integration problems

I am also facing the same problem; here is the error log from starting the swift-proxy server:

File "/usr/bin/swift-proxy-server", line 22, in <module> run_wsgi(conf_file, 'proxy-server', default_port=8080, *options) File "/usr/lib/pymodules/python2.6/swift/common/wsgi.py", line 123, in run_wsgi loadapp('config:%s' % conf_file, global_conf={'log_name': log_name}) File "/usr/lib/pymodules/python2.6/paste/deploy/loadwsgi.py", line 204, in loadapp return loadobj(APP, uri, name=name, *kw) File "/usr/lib/pymodules/python2.6/paste/deploy/loadwsgi.py", line 225, in loadobj return context.create() File "/usr/lib/pymodules/python2.6/paste/deploy/loadwsgi.py", line 625, in create return self.object_type.invoke(self) File "/usr/lib/pymodules/python2.6/paste/deploy/loadwsgi.py", line 168, in invoke app = filter(app) File "/usr/local/lib/python2.6/dist-packages/keystone-2012.1-py2.6.egg/keystone/middleware/auth_token.py", line 661, in auth_filter return AuthProtocol(filteredapp, conf) File "/usr/local/lib/python2.6/dist-packages/keystone-2012.1-py2.6.egg/keystone/middleware/auth_token.py", line 244, in __init__ self._init_protocol_common(app, conf) # Applies to all protocols File "/usr/local/lib/python2.6/dist-packages/keystone-2012.1-py2.6.egg/keystone/middleware/auth_token.py", line 148, in _init_protocol_common logger.info("Starting the %s component", PROTOCOL_NAME) File "/usr/lib/python2.6/logging/__init__.py", line 1048, in info self._log(INFO, msg, args, **kwargs) File "/usr/lib/python2.6/logging/__init__.py", line 1165, in _log self.handle(record) File "/usr/lib/python2.6/logging/__init__.py", line 1175, in handle self.callHandlers(record) File "/usr/lib/python2.6/logging/__init__.py", line 1212, in callHandlers hdlr.handle(record) File "/usr/lib/python2.6/logging/__init__.py", line 673, in handle self.emit(record) File "/usr/lib/python2.6/logging/handlers.py", line 771, in emit msg = self.format(record) File "/usr/lib/python2.6/logging/__init__.py", line 648, in format return fmt.format(record) File "/usr/lib/pymodules/python2.6/swift/common/utils.py", line 391, in format msg = logging.Formatter.format(self, record) File "/usr/lib/python2.6/logging/__init__.py", line 439, in format s = self._fmt % record.__dict__ KeyError: 'server'