kype's profile - activity

2015-11-12 08:17:52 -0600 received badge  Taxonomist
2015-03-29 05:51:50 -0600 received badge  Famous Question (source)
2014-10-07 15:23:32 -0600 received badge  Famous Question (source)
2014-09-09 04:30:56 -0600 received badge  Famous Question (source)
2014-07-10 03:00:12 -0600 received badge  Notable Question (source)
2014-06-26 16:47:46 -0600 received badge  Popular Question (source)
2014-06-20 15:02:06 -0600 received badge  Notable Question (source)
2014-06-15 21:22:51 -0600 received badge  Popular Question (source)
2014-06-13 16:59:03 -0600 received badge  Student (source)
2014-06-12 02:45:04 -0600 asked a question Is it safe to ignore the "Service Unavailable" errors when running Swift ssbench?

I have just done a fresh installation of RDO PackStack and am using SSBench to benchmark the Swift storage system. I run this command to start the benchmarking process:

ssbench-master run-scenario -f scenarios/very_small.scenario -u 4 -c 80 -o 613 --pctile 50 --workers 2

During the run, many exceptions are raised:

WARNING:calculate_scenario_stats: exception from worker 1: ClientException('Object GET failed',)
INFO:Traceback (most recent call last):
  File "/usr/lib/python2.6/site-packages/ssbench/worker.py", line 206, in handle_job
    handler(job_data)
  File "/usr/lib/python2.6/site-packages/ssbench/worker.py", line 444, in handle_get_object
    resp_chunk_size=object_info.get('block_size', DEFAULT_BLOCK_SIZE))
  File "/usr/lib/python2.6/site-packages/ssbench/worker.py", line 357, in ignoring_http_responses
    raise error
ClientException: Object GET failed: http://192.168.36.229:8080/v1/AUTH_a25f1a2f28a74907a06532147633c020/ssbench_000001/small_000001 503 Service Unavailable  [first 60 chars of response] <html><h1>Service Unavailable</h1><p>The server is currently

However, SSBench is still able to produce the benchmark results:

Small test scenario  (generated with ssbench version 0.2.23)
Worker count:   2   Concurrency:   4  Ran 2014-06-01 19:13:27 UTC to 2014-06-01 19:13:30 UTC (2s)

% Ops    C   R   U   D       Size Range       Size Name
 91%   % 10  75  15   0        4 kB -   8 kB  tiny
  9%   % 10  75  15   0       20 kB -  40 kB  small
---------------------------------------------------------------------
         10  75  15   0      CRUD weighted average

TOTAL
       Count:   101 (  512 error;  4718 retries: 4671.29%)  Average requests per second:   7.1
                            min       max      avg      std_dev  50%-ile                   Worst latency TX ID
       First-byte latency:  0.016 -   0.273    0.053  (  0.031)    0.047  (all obj sizes)  txe207b1a88a37420aa12b8-00538b7b5a
       Last-byte  latency:  0.016 -   0.274    0.053  (  0.031)    0.047  (all obj sizes)  txe207b1a88a37420aa12b8-00538b7b5a
       First-byte latency:  0.016 -   0.161    0.051  (  0.022)    0.047  (    tiny objs)  txd876e69941574f26acbac-00538b7b5a
       Last-byte  latency:  0.016 -   0.161    0.051  (  0.022)    0.047  (    tiny objs)  txd876e69941574f26acbac-00538b7b5a
       First-byte latency:  0.027 -   0.273    0.075  (  0.071)    0.059  (   small objs)  txe207b1a88a37420aa12b8-00538b7b5a
       Last-byte  latency:  0.028 -   0.274    0.076  (  0.071)    0.059  (   small objs)  txe207b1a88a37420aa12b8-00538b7b5a

READ
       Count:   101 (  350 error;  3509 retries: 3474.26%)  Average requests per second:   7.1
                            min       max      avg      std_dev  50%-ile                   Worst latency TX ID
       First-byte latency:  0.016 -   0.273    0.053  (  0.031)    0.047  (all obj sizes)  txe207b1a88a37420aa12b8-00538b7b5a
       Last-byte  latency:  0.016 -   0.274    0.053  (  0.031)    0.047  (all obj sizes)  txe207b1a88a37420aa12b8-00538b7b5a
       First-byte latency:  0.016 -   0.161    0.051  (  0.022)    0.047  (    tiny objs)  txd876e69941574f26acbac-00538b7b5a
       Last-byte  latency:  0.016 -   0.161    0.051  (  0.022)    0.047  (    tiny objs)  txd876e69941574f26acbac-00538b7b5a
       First-byte latency:  0.027 -   0.273    0.075  (  0.071)    0.059  (   small objs)  txe207b1a88a37420aa12b8-00538b7b5a
       Last-byte  latency:  0.028 -   0.274    0.076  (  0.071)    0.059  (   small objs)  txe207b1a88a37420aa12b8-00538b7b5a

So is it safe to ignore the Python exceptions raised above? Thanks in advance.

My Setup:

PackStack RDO All-IN-ONE configuration (Icehouse release) running on CentOS 6.5

SSBench installed on same system
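A 503 from the Swift proxy usually means a back-end account/container/object server was unreachable or overloaded, so before ignoring the exceptions it is worth confirming the services are healthy. A minimal sketch, assuming the standard swift-init tool and the default RDO/CentOS 6 syslog location (the log path may differ on your setup):

```shell
# Check that all Swift services are actually running; swift-init manages
# the proxy, account, container, and object servers together.
sudo swift-init all status

# Count how many 503 responses the proxy logged. RDO on CentOS 6 typically
# routes Swift's log output to /var/log/messages (path is an assumption).
sudo grep -c ' 503 ' /var/log/messages
```

If the services are up and the 503s only appear under load, they likely reflect the single-node setup saturating rather than a configuration error.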

2014-06-07 08:19:49 -0600 received badge  Notable Question (source)
2014-06-06 01:36:15 -0600 received badge  Popular Question (source)
2014-06-05 23:06:38 -0600 received badge  Supporter (source)
2014-06-05 23:06:37 -0600 received badge  Scholar (source)
2014-06-05 23:06:34 -0600 commented answer Rebooting a machine running RDO PackStack

Thank you so much. I assumed it was like DevStack and was searching around for a restart command.

2014-06-05 22:05:20 -0600 asked a question Rebooting a machine running RDO PackStack

I have installed the RDO Packstack All-in-One configuration on my CentOS 6.5 node. If I reboot the machine, how do I restart my OpenStack cloud?
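For reference, RDO installs the OpenStack components as regular SysV init services on CentOS 6, so they can be managed with the helpers from the openstack-utils package. A sketch, assuming that package is present (Packstack normally pulls it in; otherwise install it with yum):

```shell
# Restart every installed OpenStack service in one go
# (openstack-service wraps the individual "service openstack-* restart" calls).
sudo openstack-service restart

# Verify that the services came back up after the reboot.
sudo openstack-status
```

In practice the services are usually registered to start on boot via chkconfig, so a reboot alone may bring the cloud back without manual intervention.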

2014-06-04 04:05:07 -0600 received badge  Enthusiast
2014-06-01 01:08:45 -0600 asked a question ssbench install error (No distributions found for statlib)

Hi,

I have been trying to install ssbench to benchmark my OpenStack cloud running on my Ubuntu 14.04 server.

I used the guide at https://github.com/swiftstack/ssbench and tried to install ssbench with these commands:

$ sudo apt-get install -y python-dev python-pip 'g++' libzmq-dev libevent-dev
$ sudo pip install --upgrade distribute
$ sudo pip install Cython gevent pyzmq==2.2.0
$ sudo pip install ssbench

However, the ssbench installation step (sudo pip install ssbench) fails with this error:

Downloading/unpacking statlib (from ssbench)
  Could not find any downloads that satisfy the requirement statlib (from ssbench)
  Some externally hosted files were ignored (use --allow-external statlib to allow).
Cleaning up...
No distributions at all found for statlib (from ssbench)
Storing debug log for failure in /home/stack/.pip/pip.log

Is there any way to fix this?
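The error output itself points at the likely fix: statlib is hosted outside PyPI, and newer pip versions refuse externally hosted packages unless you allow them explicitly. A sketch of the retry, assuming pip 1.5 or later (which also requires --allow-unverified for packages without a verifiable checksum on PyPI):

```shell
# Re-run the install while explicitly permitting the externally hosted
# statlib dependency, as suggested by pip's own error message.
sudo pip install --allow-external statlib --allow-unverified statlib ssbench
```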