
Swift-Veeam Cloud Backup: Request Timeout

asked 2013-06-12 08:19:25 -0600

Bob51

Hello everyone,

First of all, sorry for my English! :)

I'm writing because I have a problem with a backup plan in Veeam Cloud Backup using OpenStack Object Storage (Swift).

When I launch the Veeam Cloud Backup plan with a 25 GB file, it tells me "Insufficient disk space". With a 120 MB file, for example, the backup plan runs, and then the following message appears: "Request Timeout: The server has waited too long for the request to be sent by the client."

So I raised a few settings to 40 GB, namely:

vim /etc/swift/swift.conf

max_file_size = 42949672960

vim /etc/swift/proxy-server.conf

[app:proxy-server]
object_chunk_size = 42949672960
client_chunk_size = 42949672960
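
For reference, here is where I believe these settings actually belong and what they control (this is only my understanding of the documentation, and the timeout values below are guesses on my part):

# /etc/swift/swift.conf -- the per-object size limit lives under [swift-constraints]
[swift-constraints]
# default is 5368709120 (5 GiB); objects larger than this have to be uploaded as segments
max_file_size = 5368709120

# /etc/swift/proxy-server.conf
[app:proxy-server]
# object_chunk_size / client_chunk_size are read-buffer sizes in bytes (default 65536),
# not upload limits, so they can stay at their defaults
# client_timeout is how long the proxy waits for the client to send the next chunk
# before answering "408 Request Timeout" (default 60 seconds)
client_timeout = 120
# node_timeout is how long the proxy waits for a storage node to respond (default 10)
node_timeout = 30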

Even so, I think the problem is with one of these variables, but on which side of OpenStack Swift? I have attached below my installation steps and configuration files for the Swift storage node and the Swift proxy.

Can you shed some light on this problem, please? Thank you in advance.

Matthew



7 - Swift Proxy Installation (192.168.220.71)


apt-get install swift openssh-server rsync memcached python-netifaces python-xattr python-memcache

mkdir -p /etc/swift
chown -R swift:swift /etc/swift/

Copy the swift.conf file from the storage server:
scp test@192.168.220.62:/etc/swift/swift.conf /etc/swift/

apt-get install swift-proxy memcached python-keystoneclient python-swiftclient python-webob

vim /etc/memcached.conf

-l 192.168.220.71

service memcached restart
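
For what it's worth, I believe the cache filter in proxy-server.conf can be pointed at this memcached instance explicitly (otherwise it falls back to /etc/swift/memcache.conf or 127.0.0.1:11211), something like:

[filter:cache]
use = egg:swift#memcache
memcache_servers = 192.168.220.71:11211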

vim /etc/swift/proxy-server.conf

[DEFAULT]
bind_port = 8080
user = swift

[pipeline:main]
pipeline = healthcheck cache authtoken keystoneauth proxy-server

[app:proxy-server]
use = egg:swift#proxy
allow_account_management = true
account_autocreate = true
object_chunk_size = 42949672960
client_chunk_size = 42949672960

[filter:keystoneauth]
use = egg:swift#keystoneauth
operator_roles = Member,admin,swiftoperator

[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory

# Delaying the auth decision is required to support token-less
# usage for anonymous referrers ('.r:*').
delay_auth_decision = true

# cache directory for signing certificate
signing_dir = /home/swift/keystone-signing

# auth_* settings refer to the Keystone server
auth_protocol = http
auth_host = 192.168.220.70
auth_port = 35357

# the same admin_token as provided in keystone.conf
admin_token = test2013

# the service tenant and swift userid and password created in Keystone
admin_tenant_name = service
admin_user = swift
admin_password = test2013

[filter:cache]
use = egg:swift#memcache

[filter:catch_errors]
use = egg:swift#catch_errors

[filter:healthcheck]
use = egg:swift#healthcheck


mkdir -p /home/swift/keystone-signing
chown -R swift:swift /home/swift/keystone-signing

cd /etc/swift
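# create the rings: part_power=18 (2^18 partitions), replicas=3, min_part_hours=1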
swift-ring-builder account.builder create 18 3 1
swift-ring-builder container.builder create 18 3 1
swift-ring-builder object.builder create 18 3 1

vim exportring

# set the zone number for that storage device
export ZONE=1
# relative weight (higher for bigger/faster disks)
export WEIGHT=100
# storage device
export DEVICE=sdb1
# IP of the storage node, used by the ring-builder add commands below
export STORAGE_LOCAL_NET_IP=192.168.220.62

source exportring
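# add the storage node's device to each ring: zone, IP and port of the
# account (6002) / container (6001) / object (6000) server, device name and weight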

swift-ring-builder account.builder add z$ZONE-$STORAGE_LOCAL_NET_IP:6002/$DEVICE $WEIGHT
swift-ring-builder container.builder add z$ZONE-$STORAGE_LOCAL_NET_IP:6001/$DEVICE $WEIGHT
swift-ring-builder object.builder add z$ZONE-$STORAGE_LOCAL_NET_IP:6000/$DEVICE $WEIGHT

swift-ring-builder account.builder
swift-ring-builder container.builder
swift-ring-builder object.builder
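# the bare builder commands above just print the ring contents; rebalance below
# assigns the partitions to devices and writes the account/container/object .ring.gz files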

swift-ring-builder account.builder rebalance
swift-ring-builder container.builder rebalance
swift-ring-builder object.builder rebalance

Copy the rings to the storage server:
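
(I believe this simply means copying the generated .ring.gz files, the same way swift.conf was copied earlier, e.g.:)

scp /etc/swift/account.ring.gz /etc/swift/container.ring.gz /etc/swift/object.ring.gz test@192.168.220.62:/etc/swift/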

chown -R swift:swift /etc/swift

swift-init proxy start
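
To check that the proxy answers, I believe the healthcheck filter in the pipeline can be queried directly and should simply return OK:

curl http://192.168.220.71:8080/healthcheck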


6 - Swift Installation (192.168.220.62)


apt-get install swift openssh-server rsync memcached python-netifaces ...

(more)

Comments

UP! Please! :) The problem seems to come from the OpenStack side.

Bob51 (2013-06-17 04:06:42 -0600)

1 answer


answered 2013-06-13 04:43:28 -0600

Bob51

updated 2013-06-17 04:05:58 -0600



A member of the Veeam community left a reply about my problem:

I'm not sure about the second error message, but the disk space insufficient message is correct if you are trying to upload large chunks without using the "Advanced" option. When using "Advanced" we automatically break large files into smaller "chunks" that can be uploaded (default is 10MB, can be set under options). With "Simple" mode we attempt to upload the files exactly as they exist on the local disk, so you hit chunk size limits with large files.
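
For reference, on the Swift side the usual way around the single-object size limit seems to be a segmented upload; for example with python-swiftclient (the container and file names here are only placeholders):

swift -V 2.0 -A http://192.168.220.70:5000/v2.0 -U demo:admin -K $ADMINPASS upload --segment-size 1073741824 backups bigfile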

Veeam support response:

Hello,
 
Thank you for your reply.
 
Here is the information that I have seen in the log:
"
2013-06-12 14:26:23,700 [PL] [1] INFO - Loading info for share drive cbb_configuration
2013-06-12 14:26:23,989 [Base] [14] WARN - memoryManager: Memory allocation limit is used. Available: 314572800, Need: 25894115115
2013-06-12 14:26:23,992 [Base] [14] WARN - Allocating 25,894,115,115 bytes store on disk.
2013-06-12 14:26:24,004 [Base] [14] ERROR - memoryManager: Failed to allocate file on disk. Available: 0, Need: 25894115115. Error: Not enough space on the disk.
"
This points to the cause of the problem. I do not think the problem comes from the Veeam side;
OpenStack is probably returning the available space value incorrectly.

Have you contacted OpenStack? What was their answer?

I remain at your disposal.
 
Waiting for your answer,
Sincerely,
Veeam Software.

-------------------------------------

Just for information, here is what I get when I run the following commands on the proxy server:

root@srv-os-swift-proxy:~# swift-recon -d -v

===============================================================================
--> Starting reconnaissance on 1 hosts
===============================================================================
[2013-06-13 11:03:44] Checking disk usage now
-> http://192.168.220.62:6000/recon/diskusage: [{'device': 'sdb1', 'avail': 1606
Distribution Graph:
  0%    1 *********************************************************************
Disk usage: space used: 358199296 of 160981585920
Disk usage: space free: 160623386624 of 160981585920
Disk usage: lowest: 0.22%, highest: 0.22%, avg: 0.222509483897%
===============================================================================

root@srv-os-swift-proxy:~# swift -V 2.0 -A http://192.168.220.70:5000/v2.0 -U demo:admin -K $ADMINPASS stat
Account: AUTH_57XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
Containers: 1
   Objects: 6
     Bytes: 720243
Accept-Ranges: bytes
X-Timestamp: 1371030931.41193
Content-Type: text/plain; charset=utf-8

root@srv-os-swift-proxy:~# curl -k -v -H 'X-Storage-User: demo:admin' -H 'X-Storage-Pass: $ADMINPASS' http://192.168.220.70:5000/auth/v2.0

* About to connect() to 192.168.220.70 port 5000 (#0)
*   Trying 192.168.220.70... connected
> GET /auth/v2.0 HTTP/1.1
> User-Agent: curl/7.22.0 (x86_64-pc-linux-gnu) libcurl/7.22.0 OpenSSL/1.0.1 zlib/1.2.3.4 libidn/1.23 librtmp/2.3
> Host: 192.168.220.70:5000
> Accept: */*
> X-Storage-User: demo:admin
> X-Storage-Pass: $ADMINPASS
>
< HTTP/1.1 404 Not Found
< Vary: X-Auth-Token
< Content-Type: application/json
< Content-Length: 93
< Date: Thu, 13 Jun 2013 09:37:02 GMT
<
* Connection #0 to host 192.168.220.70 left intact
* Closing connection #0
{"error": {"message": "The resource could not be found.", "code": 404, "title": "Not Found"}}

root@srv-os-swift-proxy:~# curl -k -v -X 'POST' http://192.168.220.70:5000 ...

(more)

Comments

Hey, were you able to find a fix for this?

koolhead17 (2014-01-31 14:36:44 -0600)


Stats

Asked: 2013-06-12 08:19:25 -0600

Seen: 1,529 times

Last updated: Jun 17 '13