s3cmd(1)                    General Commands Manual                   s3cmd(1)

NAME
       s3cmd - tool for managing Amazon S3 storage space and Amazon CloudFront
       content delivery network

SYNOPSIS
       s3cmd [OPTIONS] COMMAND [PARAMETERS]

DESCRIPTION
       s3cmd is a command line client for copying files to/from Amazon S3
       (Simple Storage Service) and performing other related tasks, for
       instance creating and removing buckets, listing objects, etc.

COMMANDS
       s3cmd can do several actions specified by the following commands.

       s3cmd mb s3://BUCKET
              Make bucket

       s3cmd rb s3://BUCKET
              Remove bucket

       s3cmd ls [s3://BUCKET[/PREFIX]]
              List objects or buckets

       s3cmd la
              List all objects in all buckets

       s3cmd put FILE [FILE...] s3://BUCKET[/PREFIX]
              Put file into bucket

       s3cmd get s3://BUCKET/OBJECT LOCAL_FILE
              Get file from bucket

       s3cmd del s3://BUCKET/OBJECT
              Delete file from bucket

       s3cmd rm s3://BUCKET/OBJECT
              Delete file from bucket (alias for del)

       s3cmd restore s3://BUCKET/OBJECT
              Restore file from Glacier storage

       s3cmd sync LOCAL_DIR s3://BUCKET[/PREFIX] or s3://BUCKET[/PREFIX] LOCAL_DIR
              Synchronize a directory tree to S3 (checks file freshness using
              size and md5 checksum, unless overridden by options, see below)

       s3cmd du [s3://BUCKET[/PREFIX]]
              Disk usage by buckets

       s3cmd info s3://BUCKET[/OBJECT]
              Get various information about Buckets or Files

       s3cmd cp s3://BUCKET1/OBJECT1 s3://BUCKET2[/OBJECT2]
              Copy object

       s3cmd modify s3://BUCKET1/OBJECT
              Modify object metadata

       s3cmd mv s3://BUCKET1/OBJECT1 s3://BUCKET2[/OBJECT2]
              Move object

       s3cmd setacl s3://BUCKET[/OBJECT]
              Modify Access control list for Bucket or Files

       s3cmd setpolicy FILE s3://BUCKET
              Modify Bucket Policy

       s3cmd delpolicy s3://BUCKET
              Delete Bucket Policy

       s3cmd setcors FILE s3://BUCKET
              Modify Bucket CORS

       s3cmd delcors s3://BUCKET
              Delete Bucket CORS

       s3cmd payer s3://BUCKET
              Modify Bucket Requester Pays policy

       s3cmd multipart s3://BUCKET [Id]
              Show multipart uploads

       s3cmd abortmp s3://BUCKET/OBJECT Id
              Abort a multipart upload

       s3cmd listmp s3://BUCKET/OBJECT Id
              List parts of a multipart upload

       s3cmd accesslog s3://BUCKET
              Enable/disable bucket access logging

       s3cmd sign STRING-TO-SIGN
              Sign arbitrary string using the secret key

       s3cmd signurl s3://BUCKET/OBJECT <expiry_epoch|+expiry_offset>
              Sign an S3 URL to provide limited public access with expiry

       s3cmd fixbucket s3://BUCKET[/PREFIX]
              Fix invalid file names in a bucket

       s3cmd expire s3://BUCKET
              Set or delete expiration rule for the bucket

       s3cmd setlifecycle FILE s3://BUCKET
              Upload a lifecycle policy for the bucket

       s3cmd getlifecycle s3://BUCKET
              Get a lifecycle policy for the bucket

       s3cmd dellifecycle s3://BUCKET
              Remove a lifecycle policy for the bucket

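       For orientation, a few typical invocations are shown below; the bucket,
       prefix and file names are illustrative placeholders only:

              s3cmd mb s3://my-bucket
              s3cmd put report.pdf s3://my-bucket/docs/
              s3cmd ls s3://my-bucket/docs/
              s3cmd get s3://my-bucket/docs/report.pdf report.pdf
              s3cmd signurl s3://my-bucket/docs/report.pdf +3600
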

       Commands for static WebSites configuration

       s3cmd ws-create s3://BUCKET
              Create Website from bucket

       s3cmd ws-delete s3://BUCKET
              Delete Website

       s3cmd ws-info s3://BUCKET
              Info about Website

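       A minimal website setup might look like the following, using the
       --ws-index and --ws-error options described under OPTIONS; the bucket
       and document names are illustrative:

              s3cmd ws-create --ws-index=index.html --ws-error=error.html s3://my-bucket
              s3cmd ws-info s3://my-bucket
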
       Commands for CloudFront management

       s3cmd cflist
              List CloudFront distribution points

       s3cmd cfinfo [cf://DIST_ID]
              Display CloudFront distribution point parameters

       s3cmd cfcreate s3://BUCKET
              Create CloudFront distribution point

       s3cmd cfdelete cf://DIST_ID
              Delete CloudFront distribution point

       s3cmd cfmodify cf://DIST_ID
              Change CloudFront distribution point parameters

       s3cmd cfinvalinfo cf://DIST_ID[/INVAL_ID]
              Display CloudFront invalidation request(s) status

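       A sketch of a typical CloudFront workflow, assuming a placeholder
       distribution ID cf://E1ABCDEFGHIJ and an illustrative CNAME:

              s3cmd cfcreate s3://my-bucket
              s3cmd cflist
              s3cmd cfmodify --cf-add-cname=cdn.example.com cf://E1ABCDEFGHIJ
              s3cmd cfinvalinfo cf://E1ABCDEFGHIJ
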
OPTIONS
       Some of the below specified options can have their default values set
       in the s3cmd config file (by default $HOME/.s3cfg). As it is a simple
       text file, feel free to open it with your favorite text editor and
       make any changes you like.

       -h, --help
              show this help message and exit

       --configure
              Invoke interactive (re)configuration tool. Optionally use as
              '--configure s3://some-bucket' to test access to a specific
              bucket instead of attempting to list them all.

       -c FILE, --config=FILE
              Config file name. Defaults to $HOME/.s3cfg

       --dump-config
              Dump current configuration after parsing config files and
              command line options and exit.

       --access_key=ACCESS_KEY
              AWS Access Key

       --secret_key=SECRET_KEY
              AWS Secret Key

       --access_token=ACCESS_TOKEN
              AWS Access Token

       -n, --dry-run
              Only show what should be uploaded or downloaded but don't
              actually do it. May still perform S3 requests to get bucket
              listings and other information though (only for file transfer
              commands)

       -s, --ssl
              Use HTTPS connection when communicating with S3. (default)

       --no-ssl
              Don't use HTTPS.

       -e, --encrypt
              Encrypt files before uploading to S3.

       --no-encrypt
              Don't encrypt files.

       -f, --force
              Force overwrite and other dangerous operations.

       --continue
              Continue getting a partially downloaded file (only for [get]
              command).

       --continue-put
              Continue uploading partially uploaded files or multipart upload
              parts. Restarts parts/files that don't have matching size and
              md5. Skips files/parts that do. Note: md5sum checks are not
              always sufficient to check (part) file equality. Enable this at
              your own risk.

       --upload-id=UPLOAD_ID
              UploadId for Multipart Upload, in case you want to continue an
              existing upload (equivalent to --continue-put) and there are
              multiple partial uploads. Use s3cmd multipart [URI] to see what
              UploadIds are associated with the given URI.

       --skip-existing
              Skip over files that exist at the destination (only for [get]
              and [sync] commands).

       -r, --recursive
              Recursive upload, download or removal.

       --check-md5
              Check MD5 sums when comparing files for [sync]. (default)

       --no-check-md5
              Do not check MD5 sums when comparing files for [sync]. Only
              size will be compared. May significantly speed up transfer but
              may also miss some changed files.

       -P, --acl-public
              Store objects with ACL allowing read for anyone.

       --acl-private
              Store objects with default ACL allowing access for you only.

       --acl-grant=PERMISSION:EMAIL or USER_CANONICAL_ID
              Grant stated permission to a given Amazon user. Permission is
              one of: read, write, read_acp, write_acp, full_control, all

       --acl-revoke=PERMISSION:USER_CANONICAL_ID
              Revoke stated permission for a given Amazon user. Permission is
              one of: read, write, read_acp, write_acp, full_control, all

       -D NUM, --restore-days=NUM
              Number of days to keep restored file available (only for
              'restore' command). Default is 1 day.

       --restore-priority=RESTORE_PRIORITY
              Priority for restoring files from S3 Glacier (only for
              'restore' command). Choices available: bulk, standard,
              expedited

       --delete-removed
              Delete destination objects with no corresponding source file
              [sync]

       --no-delete-removed
              Don't delete destination objects.

       --delete-after
              Perform deletes AFTER new uploads when delete-removed is
              enabled [sync]

       --delay-updates
              *OBSOLETE* Put all updated files into place at end [sync]

       --max-delete=NUM
              Do not delete more than NUM files. [del] and [sync]

       --limit=NUM
              Limit number of objects returned in the response body (only
              for [ls] and [la] commands)

       --add-destination=ADDITIONAL_DESTINATIONS
              Additional destination for parallel uploads, in addition to
              last arg. May be repeated.

       --delete-after-fetch
              Delete remote objects after fetching to local file (only for
              [get] and [sync] commands).

       -p, --preserve
              Preserve filesystem attributes (mode, ownership, timestamps).
              Default for [sync] command.

       --no-preserve
              Don't store FS attributes

       --exclude=GLOB
              Filenames and paths matching GLOB will be excluded from sync

       --exclude-from=FILE
              Read --exclude GLOBs from FILE

       --rexclude=REGEXP
              Filenames and paths matching REGEXP (regular expression) will
              be excluded from sync

       --rexclude-from=FILE
              Read --rexclude REGEXPs from FILE

       --include=GLOB
              Filenames and paths matching GLOB will be included even if
              previously excluded by one of --(r)exclude(-from) patterns

       --include-from=FILE
              Read --include GLOBs from FILE

       --rinclude=REGEXP
              Same as --include but uses REGEXP (regular expression) instead
              of GLOB

       --rinclude-from=FILE
              Read --rinclude REGEXPs from FILE

       --files-from=FILE
              Read list of source-file names from FILE. Use - to read from
              stdin.

       --region=REGION, --bucket-location=REGION
              Region to create bucket in. As of now the regions are:
              us-east-1, us-west-1, us-west-2, eu-west-1, eu-central-1,
              ap-northeast-1, ap-southeast-1, ap-southeast-2, sa-east-1

       --host=HOSTNAME
              HOSTNAME:PORT for S3 endpoint (default: s3.amazonaws.com,
              alternatives such as s3-eu-west-1.amazonaws.com). You should
              also set --host-bucket.

       --host-bucket=HOST_BUCKET
              DNS-style bucket+hostname:port template for accessing a bucket
              (default: %(bucket)s.s3.amazonaws.com)

       --reduced-redundancy, --rr
              Store object with 'Reduced redundancy'. Lower per-GB price.
              [put, cp, mv]

       --no-reduced-redundancy, --no-rr
              Store object without 'Reduced redundancy'. Higher per-GB
              price. [put, cp, mv]

       --storage-class=CLASS
              Store object with specified CLASS (STANDARD, STANDARD_IA,
              ONEZONE_IA, INTELLIGENT_TIERING, GLACIER or DEEP_ARCHIVE).
              [put, cp, mv]

       --access-logging-target-prefix=LOG_TARGET_PREFIX
              Target prefix for access logs (S3 URI) (for [cfmodify] and
              [accesslog] commands)

       --no-access-logging
              Disable access logging (for [cfmodify] and [accesslog]
              commands)

       --default-mime-type=DEFAULT_MIME_TYPE
              Default MIME-type for stored objects. Application default is
              binary/octet-stream.

       -M, --guess-mime-type
              Guess MIME-type of files by their extension or mime magic.
              Fall back to default MIME-type as specified by
              --default-mime-type option

       --no-guess-mime-type
              Don't guess MIME-type and use the default type instead.

       --no-mime-magic
              Don't use mime magic when guessing MIME-type.

       -m MIME/TYPE, --mime-type=MIME/TYPE
              Force MIME-type. Override both --default-mime-type and
              --guess-mime-type.

       --add-header=NAME:VALUE
              Add a given HTTP header to the upload request. Can be used
              multiple times. For instance set 'Expires' or 'Cache-Control'
              headers (or both) using this option.

       --remove-header=NAME
              Remove a given HTTP header. Can be used multiple times. For
              instance, remove 'Expires' or 'Cache-Control' headers (or
              both) using this option. [modify]

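       As an illustration, cache headers could be set at upload time and
       later removed again; the bucket and file names here are placeholders:

              s3cmd put --add-header="Cache-Control: max-age=86400" style.css s3://my-bucket/css/
              s3cmd modify --remove-header=Expires s3://my-bucket/css/style.css
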
       --server-side-encryption
              Specifies that server-side encryption will be used when putting
              objects. [put, sync, cp, modify]

       --server-side-encryption-kms-id=KMS_KEY
              Specifies the key id used for server-side encryption with AWS
              KMS-Managed Keys (SSE-KMS) when putting objects. [put, sync,
              cp, modify]

       --encoding=ENCODING
              Override autodetected terminal and filesystem encoding
              (character set). Autodetected: UTF-8

       --add-encoding-exts=EXTENSIONs
              Add encoding to these comma-delimited extensions, e.g.
              (css,js,html), when uploading to S3

       --verbatim
              Use the S3 name as given on the command line. No
              pre-processing, encoding, etc. Use with caution!

       --disable-multipart
              Disable multipart upload on files bigger than
              --multipart-chunk-size-mb

       --multipart-chunk-size-mb=SIZE
              Size of each chunk of a multipart upload. Files bigger than
              SIZE are automatically uploaded as multithreaded-multipart,
              smaller files are uploaded using the traditional method. SIZE
              is in megabytes, default chunk size is 15MB, minimum allowed
              chunk size is 5MB, maximum is 5GB.

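       For example, a large archive could be uploaded in 100 MB parts (the
       file and bucket names are illustrative):

              s3cmd put --multipart-chunk-size-mb=100 backup.tar s3://my-bucket/backups/
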
       --list-md5
              Include MD5 sums in bucket listings (only for 'ls' command).

       -H, --human-readable-sizes
              Print sizes in human readable form (e.g. 1kB instead of 1234).

       --ws-index=WEBSITE_INDEX
              Name of index-document (only for [ws-create] command)

       --ws-error=WEBSITE_ERROR
              Name of error-document (only for [ws-create] command)

       --expiry-date=EXPIRY_DATE
              Indicates when the expiration rule takes effect. (only for
              [expire] command)

       --expiry-days=EXPIRY_DAYS
              Indicates the number of days after object creation the
              expiration rule takes effect. (only for [expire] command)

       --expiry-prefix=EXPIRY_PREFIX
              Identifies one or more objects with the prefix to which the
              expiration rule applies. (only for [expire] command)

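       For instance, objects under an illustrative logs/ prefix could be set
       to expire 30 days after creation:

              s3cmd expire --expiry-days=30 --expiry-prefix=logs/ s3://my-bucket
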
       --progress
              Display progress meter (default on TTY).

       --no-progress
              Don't display progress meter (default on non-TTY).

       --stats
              Give some file-transfer stats.

       --enable
              Enable given CloudFront distribution (only for [cfmodify]
              command)

       --disable
              Disable given CloudFront distribution (only for [cfmodify]
              command)

       --cf-invalidate
              Invalidate the uploaded files in CloudFront. Also see [cfinval]
              command.

       --cf-invalidate-default-index
              When using Custom Origin and S3 static website, invalidate the
              default index file.

       --cf-no-invalidate-default-index-root
              When using Custom Origin and S3 static website, don't
              invalidate the path to the default index file.

       --cf-add-cname=CNAME
              Add given CNAME to a CloudFront distribution (only for
              [cfcreate] and [cfmodify] commands)

       --cf-remove-cname=CNAME
              Remove given CNAME from a CloudFront distribution (only for
              [cfmodify] command)

       --cf-comment=COMMENT
              Set COMMENT for a given CloudFront distribution (only for
              [cfcreate] and [cfmodify] commands)

       --cf-default-root-object=DEFAULT_ROOT_OBJECT
              Set the default root object to return when no object is
              specified in the URL. Use a relative path, i.e.
              default/index.html instead of /default/index.html or
              s3://bucket/default/index.html (only for [cfcreate] and
              [cfmodify] commands)

       -v, --verbose
              Enable verbose output.

       -d, --debug
              Enable debug output.

       --version
              Show s3cmd version (2.1.0) and exit.

       -F, --follow-symlinks
              Follow symbolic links as if they are regular files

       --cache-file=FILE
              Cache FILE containing local source MD5 values

       -q, --quiet
              Silence output on stdout

       --ca-certs=CA_CERTS_FILE
              Path to SSL CA certificate FILE (instead of system default)

       --check-certificate
              Check SSL certificate validity

       --no-check-certificate
              Do not check SSL certificate validity

       --check-hostname
              Check SSL certificate hostname validity

       --no-check-hostname
              Do not check SSL certificate hostname validity

       --signature-v2
              Use AWS Signature version 2 instead of newer signature
              methods. Helpful for S3-like systems that don't have AWS
              Signature v4 yet.

       --limit-rate=LIMITRATE
              Limit the upload or download speed to amount bytes per second.
              Amount may be expressed in bytes, kilobytes with the k suffix,
              or megabytes with the m suffix.

       --no-connection-pooling
              Disable connection re-use

       --requester-pays
              Set the REQUESTER PAYS flag for operations

       -l, --long-listing
              Produce long listing [ls]

       --stop-on-error
              Stop if an error occurs during transfer

       --content-disposition=CONTENT_DISPOSITION
              Provide a Content-Disposition for signed URLs, e.g., "inline;
              filename=myvideo.mp4"

       --content-type=CONTENT_TYPE
              Provide a Content-Type for signed URLs, e.g., "video/mp4"

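       The two Content-* options above apply to signed URLs; a sketch of such
       an invocation, with an illustrative object name and a one-day expiry:

              s3cmd signurl s3://my-bucket/video.mp4 +86400 --content-type="video/mp4"
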
EXAMPLES
       One of the most powerful commands of s3cmd is s3cmd sync used for
       synchronising complete directory trees to or from remote S3 storage.
       To some extent s3cmd put and s3cmd get share a similar behaviour with
       sync.

       Basic usage common in backup scenarios is as simple as:
              s3cmd sync /local/path/ s3://test-bucket/backup/

       This command will find all files under the /local/path directory and
       copy them to corresponding paths under s3://test-bucket/backup on the
       remote side. For example:
              /local/path/file1.ext         ->  s3://bucket/backup/file1.ext
              /local/path/dir123/file2.bin  ->  s3://bucket/backup/dir123/file2.bin

       However if the local path doesn't end with a slash the last
       directory's name is used on the remote side as well. Compare these
       with the previous example:
              s3cmd sync /local/path s3://test-bucket/backup/
       will sync:
              /local/path/file1.ext         ->  s3://bucket/backup/path/file1.ext
              /local/path/dir123/file2.bin  ->  s3://bucket/backup/path/dir123/file2.bin

       To retrieve the files back from S3 use inverted syntax:
              s3cmd sync s3://test-bucket/backup/ ~/restore/
       that will download files:
              s3://bucket/backup/file1.ext         ->  ~/restore/file1.ext
              s3://bucket/backup/dir123/file2.bin  ->  ~/restore/dir123/file2.bin

       Without the trailing slash on source the behaviour is similar to what
       has been demonstrated with upload:
              s3cmd sync s3://test-bucket/backup ~/restore/
       will download the files as:
              s3://bucket/backup/file1.ext         ->  ~/restore/backup/file1.ext
              s3://bucket/backup/dir123/file2.bin  ->  ~/restore/backup/dir123/file2.bin

       All source file names, the bold ones above, are matched against
       exclude rules and those that match are then re-checked against
       include rules to see whether they should be excluded or kept in the
       source list.

       For the purpose of --exclude and --include matching only the bold
       file names above are used. For instance only path/file1.ext is tested
       against the patterns, not /local/path/file1.ext

       Both --exclude and --include work with shell-style wildcards (a.k.a.
       GLOB). For greater flexibility s3cmd provides Regular-expression
       versions of the two exclude options named --rexclude and --rinclude.
       The options with the ...-from suffix (e.g. --rinclude-from) expect a
       filename as an argument. Each line of such a file is treated as one
       pattern.

       There is only one set of patterns built from all --(r)exclude(-from)
       options, and similarly for the include variant. Any file excluded
       with e.g. --exclude can be put back with a pattern found in the
       --rinclude-from list.

       Run s3cmd with --dry-run to verify that your rules work as expected.
       Use it together with --debug to get detailed information about
       matching file names against exclude and include rules.

       For example to exclude all files with ".jpg" extension except those
       beginning with a number use:

              --exclude '*.jpg' --rinclude '[0-9].*.jpg'

       To exclude all files except "*.jpg" extension, use:

              --exclude '*' --include '*.jpg'

       To exclude local directory 'somedir', be sure to use a trailing
       forward slash, as such:

              --exclude 'somedir/'

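       Putting these pieces together, a filtered synchronisation can first be
       rehearsed with --dry-run; the paths below are illustrative:

              s3cmd sync --dry-run --exclude '*' --include '*.jpg' /local/photos/ s3://test-bucket/photos/
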
SEE ALSO
       For the most up to date list of options run: s3cmd --help
       For more info about usage, examples and other related info visit
       project homepage at: http://s3tools.org

AUTHOR
       Written by Michal Ludvig and contributors

CONTACT, SUPPORT
       Preferred way to get support is our mailing list:
       s3tools-general@lists.sourceforge.net
       or visit the project homepage:
       http://s3tools.org

REPORTING BUGS
       Report bugs to s3tools-bugs@lists.sourceforge.net

COPYRIGHT
       Copyright © 2007-2015 TGRMN Software - http://www.tgrmn.com - and
       contributors

LICENSE
       This program is free software; you can redistribute it and/or modify
       it under the terms of the GNU General Public License as published by
       the Free Software Foundation; either version 2 of the License, or (at
       your option) any later version. This program is distributed in the
       hope that it will be useful, but WITHOUT ANY WARRANTY; without even
       the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR
       PURPOSE. See the GNU General Public License for more details.

                                                                      s3cmd(1)