Computer networks
nekrolai, 2015-05-26 19:52:41

Low upload speed with s3cmd: what could be the problem?

Dedicated server at Selectel: Intel(R) Core(TM) i5-4670 CPU @ 3.40GHz, 8 GB RAM, Debian (kernel 3.2.57-3) x86_64 GNU/Linux.
s3cmd uploads files very slowly, at roughly 160 KB/s. There is, of course, no difference between sync and put (though I wish there were).
Here is the .s3cfg:

access_key = 
bucket_location = US
cloudfront_host = cloudfront.amazonaws.com
default_mime_type = binary/octet-stream
delete_removed = False
dry_run = False
enable_multipart = True
encoding = ISO-8859-1
encrypt = False
follow_symlinks = False
force = False
get_continue = False
gpg_command = /usr/bin/gpg
gpg_decrypt = %(gpg_command)s -d --verbose --no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o %(output_file)s %(input_file)s
gpg_encrypt = %(gpg_command)s -c --verbose --no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o %(output_file)s %(input_file)s
gpg_passphrase = 
guess_mime_type = True
host_base = s3.amazonaws.com
host_bucket = %(bucket)s.s3.amazonaws.com
human_readable_sizes = False
invalidate_on_cf = False
list_md5 = False
log_target_prefix = 
mime_type =
multipart_chunk_size_mb = 15 
preserve_attrs = True
progress_meter = True
proxy_host = 
proxy_port = 0
recursive = False
recv_chunk = 4096
reduced_redundancy = False
secret_key = 
send_chunk = 4096
simpledb_host = sdb.amazonaws.com
skip_existing = False
socket_timeout = 300
urlencoding_mode = normal
use_https = False
verbosity = WARNING
website_endpoint = http://%(bucket)s.s3-website-%(location)s.amazonaws.com/
website_error = 
website_index = index.html

Here is the output of s3cmd put ... --debug:
DEBUG: ConfigParser: Reading file '/root/.s3cfg'
DEBUG: ConfigParser: access_key->
DEBUG: ConfigParser: bucket_location->US
DEBUG: ConfigParser: cloudfront_host->cloudfront.amazonaws.com
DEBUG: ConfigParser: default_mime_type->binary/octet-stream
DEBUG: ConfigParser: delete_removed->False
DEBUG: ConfigParser: dry_run->False
DEBUG: ConfigParser: enable_multipart->True
DEBUG: ConfigParser: encoding->ISO-8859-1
DEBUG: ConfigParser: encrypt->False
DEBUG: ConfigParser: follow_symlinks->False
DEBUG: ConfigParser: force->False
DEBUG: ConfigParser: get_continue->False
DEBUG: ConfigParser: gpg_command->/usr/bin/gpg
DEBUG: ConfigParser: gpg_decrypt->%(gpg_command)s -d --verbose --no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o %(output_file)s %(input_file)s
DEBUG: ConfigParser: gpg_encrypt->%(gpg_command)s -c --verbose --no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o %(output_file)s %(input_file)s
DEBUG: ConfigParser: gpg_passphrase->...-3_chars...
DEBUG: ConfigParser: guess_mime_type->True
DEBUG: ConfigParser: host_base->s3.amazonaws.com
DEBUG: ConfigParser: host_bucket->%(bucket)s.s3.amazonaws.com
DEBUG: ConfigParser: human_readable_sizes->False
DEBUG: ConfigParser: invalidate_on_cf->False
DEBUG: ConfigParser: list_md5->False
DEBUG: ConfigParser: log_target_prefix->
DEBUG: ConfigParser: mime_type->
DEBUG: ConfigParser: multipart_chunk_size_mb->15 
DEBUG: ConfigParser: preserve_attrs->True
DEBUG: ConfigParser: progress_meter->True
DEBUG: ConfigParser: proxy_host->
DEBUG: ConfigParser: proxy_port->0
DEBUG: ConfigParser: recursive->False
DEBUG: ConfigParser: recv_chunk->4096
DEBUG: ConfigParser: reduced_redundancy->False
DEBUG: ConfigParser: secret_key->
DEBUG: ConfigParser: send_chunk->4096
DEBUG: ConfigParser: simpledb_host->sdb.amazonaws.com
DEBUG: ConfigParser: skip_existing->False
DEBUG: ConfigParser: socket_timeout->300
DEBUG: ConfigParser: urlencoding_mode->normal
DEBUG: ConfigParser: use_https->False
DEBUG: ConfigParser: verbosity->WARNING
DEBUG: ConfigParser: website_endpoint->http://%(bucket)s.s3-website-%(location)s.amazonaws.com/
DEBUG: ConfigParser: website_error->
DEBUG: ConfigParser: website_index->index.html
DEBUG: Updating Config.Config cache_file -> 
DEBUG: Updating Config.Config encoding -> ISO-8859-1
DEBUG: Updating Config.Config follow_symlinks -> False
DEBUG: Updating Config.Config verbosity -> 10
DEBUG: Unicodising 'put' using ISO-8859-1
DEBUG: Unicodising '/backup/local/sys/system-20150520.tar.gz' using ISO-8859-1
DEBUG: Unicodising 's3://%name%' using ISO-8859-1
DEBUG: Command: put
INFO: Compiling list of local files...
DEBUG: DeUnicodising u'system-20150520.tar.gz' using ISO-8859-1
DEBUG: DeUnicodising u'/backup/local/sys' using ISO-8859-1
DEBUG: DeUnicodising u'system-20150520.tar.gz' using ISO-8859-1
DEBUG: Unicodising 'system-20150520.tar.gz' using ISO-8859-1
DEBUG: Unicodising '/backup/local/sys/system-20150520.tar.gz' using ISO-8859-1
DEBUG: Applying --exclude/--include
DEBUG: CHECK: system-20150520.tar.gz
DEBUG: PASS: u'system-20150520.tar.gz'
INFO: Running stat() and reading/calculating MD5 values on 1 files, this may take some time...
DEBUG: doing file I/O to read md5 of system-20150520.tar.gz
INFO: Summary: 1 local files to upload
DEBUG: attr_header: {'x-amz-meta-s3cmd-attrs': 'uid:0/gname:root/uname:root/gid:0/mode:33188/mtime:1432096570/atime:1432572175/md5:'}
DEBUG: String 'system-20150520.tar.gz' encoded to 'system-20150520.tar.gz'
DEBUG: SignHeaders: 'POST\n\napplication/x-gzip\n\nx-amz-date:Tue, 26 May 2015 16:31:09 +0000\nx-amz-meta-s3cmd-attrs:uid:0/gname:root/uname:root/gid:0/mode:33188/mtime:1432096570/atime:1432572175/md5:/ctime:1432096616\n/%NAME%/system-20150520.tar.gz?uploads'
DEBUG: CreateRequest: resource[uri]=/system-20150520.tar.gz?uploads
DEBUG: SignHeaders: 'POST\n\napplication/x-gzip\n\nx-amz-date:Tue, 26 May 2015 16:31:09 +0000\nx-amz-meta-s3cmd-attrs:uid:0/gname:root/uname:root/gid:0/mode:33188/mtime:1432096570/atime:1432572175/md5:\n/%NAME%/system-20150520.tar.gz?uploads'
DEBUG: Processing request, please wait...
DEBUG: get_hostname(%NAME%): %name%.s3.amazonaws.com
DEBUG: ConnMan.get(): creating new connection: http://%NAME%.s3.amazonaws.com
DEBUG: format_uri(): /system-20150520.tar.gz?uploads
DEBUG: Sending request method_string='POST', uri='/system-20150520.tar.gz?uploads', headers={'content-length': '0', 'x-amz-meta-s3cmd-attrs': 'uid:0/gname:root/uname:root/gid:0/mode:33188/mtime:1432096570/atime:1432572175/md5:, 'content-type': 'application/x-gzip', 'Authorization': 'AWS :ud/EiQTuPV+PW6J5+hhXT3IGWQo=', 'x-amz-date': 'Tue, 26 May 2015 16:31:09 +0000'}, body=(0 bytes)
DEBUG: Response: {'status': 200, 'headers': {'x-amz-id-2': 'mMHQmrfpm42lZpFbkI/rbEe+7/zVB2uP0kq4QYafF+yewsJa7p27ShOmOmPf4I5zHGpZR4Y1A0Q=', 'date': 'Tue, 26 May 2015 16:32:10 GMT', 'transfer-encoding': 'chunked', 'x-amz-request-id': 'E678816765DCC859', 'server': 'AmazonS3'}, 'reason': 'OK', 'data': '<?xml version="1.0" encoding="UTF-8"?>\n<InitiateMultipartUploadResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><Bucket>%NAME%</Bucket><Key>system-20150520.tar.gz</Key><UploadId></UploadId></InitiateMultipartUploadResult>'}
DEBUG: ConnMan.put(): connection put back to pool (http://%NAME%.s3.amazonaws.com#1)
DEBUG: MultiPart: Uploading /backup/local/sys/system-20150520.tar.gz in 38 parts
DEBUG: Unicodising '/backup/local/sys/system-20150520.tar.gz' using ISO-8859-1
DEBUG: Uploading part 1 of ' (15728640 bytes)
DEBUG: String 'system-20150520.tar.gz' encoded to 'system-20150520.tar.gz'
DEBUG: SignHeaders: 'PUT\n\n\n\nx-amz-date:Tue, 26 May 2015 16:31:10 +0000\n/%NAME%/system-20150520.tar.gz?partNumber=1&uploadId=
DEBUG: CreateRequest: resource[uri]=/system-20150520.tar.gz?partNumber=1&uploadId=
DEBUG: SignHeaders: 'PUT\n\n\n\nx-amz-date:Tue, 26 May 2015 16:31:10 +0000\n/%NAME%/system-20150520.tar.gz?partNumber=1&uploadId=
/backup/local/sys/system-20150520.tar.gz -> s3://%NAME%/system-20150520.tar.gz  [part 1 of 38, 15MB]
DEBUG: get_hostname(%NAME%): %NAME%.s3.amazonaws.com
DEBUG: ConnMan.get(): re-using connection: http://%NAME%.s3.amazonaws.com#1
DEBUG: format_uri(): /system-20150520.tar.gz?partNumber=1&uploadId=
     4096 of 15728640     0% in    0s    17.44 MB/s^CERROR: 
Upload of '/backup/local/sys/system-20150520.tar.gz' part 1 failed. Use
  /usr/bin/s3cmd abortmp s3://%NAME%/system-20150520.tar.gz 
to abort the upload, or
  /usr/bin/s3cmd --upload-id  put ...
to continue the upload.
See ya!

(All MD5 sums, secret keys and bucket names have been removed from the listings.)
The settings are completely standard.
This is definitely not a bandwidth limit and definitely not a lack of resources. Other people with servers at Selectel have not complained about upload speed, and according to the customer everything used to be fast.
I suspect the multipart upload is misbehaving. Everywhere it is written that the parts should be uploaded in parallel streams, but in practice each part goes up one after another (a sketch of what I mean is below). Maybe I'm doing something wrong.
Installing awscli is not an option: it pulls in a lot of dependencies that could affect the operation of the server.
Please point me in the right direction.
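For comparison, here is roughly what I mean by parallel parts, as a minimal boto3 sketch (the bucket name, key and concurrency value are placeholders, and I have not actually installed boto3 because of the same dependency concern):

import boto3
from boto3.s3.transfer import TransferConfig

# Illustrative values only; the bucket name is a placeholder.
config = TransferConfig(
    multipart_threshold=15 * 1024 * 1024,  # switch to multipart above 15 MB
    multipart_chunksize=15 * 1024 * 1024,  # same part size as multipart_chunk_size_mb = 15
    max_concurrency=8,                     # upload up to 8 parts in parallel
)

s3 = boto3.client("s3")
s3.upload_file(
    "/backup/local/sys/system-20150520.tar.gz",
    "my-bucket",                           # placeholder for the real bucket name
    "system-20150520.tar.gz",
    Config=config,
)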


1 answer(s)
nekrolai, 2015-05-29
@nekrolai

The root cause is still unclear; the solution was found here: emind.
I increased the send_chunk parameter until the upload speed reached reasonable values.
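For illustration, the change is just a couple of lines in .s3cfg (the exact values below are placeholders, not necessarily the ones I ended up with; recv_chunk can be raised the same way for downloads):

send_chunk = 262144
recv_chunk = 262144

After each change, re-run the same s3cmd put and watch the progress meter until the speed stops improving.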
