s3cmd(1)                    General Commands Manual                   s3cmd(1)

NAME
       s3cmd - tool for managing Amazon S3 storage space and Amazon CloudFront
       content delivery network

SYNOPSIS
       s3cmd [OPTIONS] COMMAND [PARAMETERS]

DESCRIPTION
       s3cmd is a command line client for copying files to/from Amazon S3
       (Simple Storage Service) and performing other related tasks, for
       instance creating and removing buckets, listing objects, etc.

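       As a quick orientation, a first session might look like this (the
       bucket and file names are placeholders, not defaults of the tool):

              s3cmd --configure
              s3cmd mb s3://example-bucket
              s3cmd put file.txt s3://example-bucket/
              s3cmd ls s3://example-bucket
              s3cmd get s3://example-bucket/file.txt file-copy.txt
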
COMMANDS
       s3cmd can do several actions specified by the following commands.

       s3cmd mb s3://BUCKET
              Make bucket

       s3cmd rb s3://BUCKET
              Remove bucket

       s3cmd ls [s3://BUCKET[/PREFIX]]
              List objects or buckets

       s3cmd la
              List all objects in all buckets

       s3cmd put FILE [FILE...] s3://BUCKET[/PREFIX]
              Put file into bucket

       s3cmd get s3://BUCKET/OBJECT LOCAL_FILE
              Get file from bucket

       s3cmd del s3://BUCKET/OBJECT
              Delete file from bucket

       s3cmd rm s3://BUCKET/OBJECT
              Delete file from bucket (alias for del)

       s3cmd restore s3://BUCKET/OBJECT
              Restore file from Glacier storage

       s3cmd sync LOCAL_DIR s3://BUCKET[/PREFIX] or s3://BUCKET[/PREFIX] LOCAL_DIR
              Synchronize a directory tree to S3 (checks file freshness using
              size and md5 checksum, unless overridden by options, see below)

       s3cmd du [s3://BUCKET[/PREFIX]]
              Disk usage by buckets

       s3cmd info s3://BUCKET[/OBJECT]
              Get various information about Buckets or Files

       s3cmd cp s3://BUCKET1/OBJECT1 s3://BUCKET2[/OBJECT2]
              Copy object

       s3cmd modify s3://BUCKET1/OBJECT
              Modify object metadata

       s3cmd mv s3://BUCKET1/OBJECT1 s3://BUCKET2[/OBJECT2]
              Move object

       s3cmd setacl s3://BUCKET[/OBJECT]
              Modify Access control list for Bucket or Files

       s3cmd setpolicy FILE s3://BUCKET
              Modify Bucket Policy

       s3cmd delpolicy s3://BUCKET
              Delete Bucket Policy

       s3cmd setcors FILE s3://BUCKET
              Modify Bucket CORS

       s3cmd delcors s3://BUCKET
              Delete Bucket CORS

       s3cmd payer s3://BUCKET
              Modify Bucket Requester Pays policy

       s3cmd multipart s3://BUCKET [Id]
              Show multipart uploads

       s3cmd abortmp s3://BUCKET/OBJECT Id
              Abort a multipart upload

       s3cmd listmp s3://BUCKET/OBJECT Id
              List parts of a multipart upload

       s3cmd accesslog s3://BUCKET
              Enable/disable bucket access logging

       s3cmd sign STRING-TO-SIGN
              Sign arbitrary string using the secret key

       s3cmd signurl s3://BUCKET/OBJECT <expiry_epoch|+expiry_offset>
              Sign an S3 URL to provide limited public access with expiry
              (see the example after this list)

       s3cmd fixbucket s3://BUCKET[/PREFIX]
              Fix invalid file names in a bucket

       s3cmd expire s3://BUCKET
              Set or delete expiration rule for the bucket

       s3cmd setlifecycle FILE s3://BUCKET
              Upload a lifecycle policy for the bucket

       s3cmd getlifecycle s3://BUCKET
              Get a lifecycle policy for the bucket

       s3cmd dellifecycle s3://BUCKET
              Remove a lifecycle policy for the bucket

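       For instance, the signurl and restore commands might be used like
       this (bucket and object names and values are illustrative only; the
       +3600 offset makes the link valid for one hour and --restore-days
       keeps the restored copy available for a week):

              s3cmd signurl s3://example-bucket/video.mp4 +3600
              s3cmd restore --restore-days=7 --restore-priority=standard \
                     s3://example-bucket/archive.tar.gz
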
   Commands for static WebSites configuration

       s3cmd ws-create s3://BUCKET
              Create Website from bucket

       s3cmd ws-delete s3://BUCKET
              Delete Website

       s3cmd ws-info s3://BUCKET
              Info about Website

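       A bucket can, for example, be turned into a simple static website
       like this (bucket and document names are placeholders):

              s3cmd ws-create --ws-index=index.html --ws-error=error.html \
                     s3://example-bucket
              s3cmd put --acl-public index.html error.html s3://example-bucket/
              s3cmd ws-info s3://example-bucket
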
   Commands for CloudFront management

       s3cmd cflist
              List CloudFront distribution points

       s3cmd cfinfo [cf://DIST_ID]
              Display CloudFront distribution point parameters

       s3cmd cfcreate s3://BUCKET
              Create CloudFront distribution point

       s3cmd cfdelete cf://DIST_ID
              Delete CloudFront distribution point

       s3cmd cfmodify cf://DIST_ID
              Change CloudFront distribution point parameters

       s3cmd cfinvalinfo cf://DIST_ID[/INVAL_ID]
              Display CloudFront invalidation request(s) status

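       For example, a distribution for a bucket might be created and then
       inspected as follows (DIST_ID stands for the distribution id that
       CloudFront reports back, not a real value):

              s3cmd cfcreate s3://example-bucket
              s3cmd cflist
              s3cmd cfinfo cf://DIST_ID
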
OPTIONS
       Some of the below specified options can have their default values set
       in the s3cmd config file (by default $HOME/.s3cfg). As it is a simple
       text file, feel free to open it with your favorite text editor and
       make any changes you like.

       -h, --help
              show this help message and exit

       --configure
              Invoke interactive (re)configuration tool. Optionally use as
              '--configure s3://some-bucket' to test access to a specific
              bucket instead of attempting to list them all.

       -c FILE, --config=FILE
              Config file name. Defaults to $HOME/.s3cfg

       --dump-config
              Dump current configuration after parsing config files and
              command line options and exit.

       --access_key=ACCESS_KEY
              AWS Access Key

       --secret_key=SECRET_KEY
              AWS Secret Key

       --access_token=ACCESS_TOKEN
              AWS Access Token

       -n, --dry-run
              Only show what should be uploaded or downloaded but don't
              actually do it. May still perform S3 requests to get bucket
              listings and other information though (only for file transfer
              commands)

       -s, --ssl
              Use HTTPS connection when communicating with S3. (default)

       --no-ssl
              Don't use HTTPS.

       -e, --encrypt
              Encrypt files before uploading to S3.

       --no-encrypt
              Don't encrypt files.

       -f, --force
              Force overwrite and other dangerous operations.

       --continue
              Continue getting a partially downloaded file (only for [get]
              command).

       --continue-put
              Continue uploading partially uploaded files or multipart upload
              parts. Restarts files/parts that don't have matching size and
              md5. Skips files/parts that do. Note: md5sum checks are not
              always sufficient to check (part) file equality. Enable this at
              your own risk.

       --upload-id=UPLOAD_ID
              UploadId for Multipart Upload, in case you want to continue an
              existing upload (equivalent to --continue-put) and there are
              multiple partial uploads. Use s3cmd multipart [URI] to see what
              UploadIds are associated with the given URI.
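
              For instance, an interrupted multipart upload might be resumed
              like this (UPLOAD_ID stands for an id reported by the
              multipart command):

                     s3cmd multipart s3://example-bucket
                     s3cmd put --continue-put --upload-id=UPLOAD_ID \
                            bigfile.iso s3://example-bucket/bigfile.iso
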
       --skip-existing
              Skip over files that exist at the destination (only for [get]
              and [sync] commands).

       -r, --recursive
              Recursive upload, download or removal.

       --check-md5
              Check MD5 sums when comparing files for [sync]. (default)

       --no-check-md5
              Do not check MD5 sums when comparing files for [sync]. Only
              size will be compared. May significantly speed up transfer but
              may also miss some changed files.

       -P, --acl-public
              Store objects with ACL allowing read for anyone.

       --acl-private
              Store objects with default ACL allowing access for you only.

       --acl-grant=PERMISSION:EMAIL or USER_CANONICAL_ID
              Grant stated permission to a given Amazon user. Permission is
              one of: read, write, read_acp, write_acp, full_control, all

       --acl-revoke=PERMISSION:USER_CANONICAL_ID
              Revoke stated permission for a given Amazon user. Permission
              is one of: read, write, read_acp, write_acp, full_control, all

       -D NUM, --restore-days=NUM
              Number of days to keep restored file available (only for
              'restore' command).

       --restore-priority=RESTORE_PRIORITY
              Priority for restoring files from S3 Glacier (only for
              'restore' command). One of: bulk, standard, expedited

       --delete-removed
              Delete destination objects with no corresponding source file
              [sync]

       --no-delete-removed
              Don't delete destination objects.

       --delete-after
              Perform deletes AFTER new uploads when delete-removed is
              enabled [sync]

       --delay-updates
              *OBSOLETE* Put all updated files into place at end [sync]

       --max-delete=NUM
              Do not delete more than NUM files. [del] and [sync]

       --limit=NUM
              Limit number of objects returned in the response body (only
              for [ls] and [la] commands)

       --add-destination=ADDITIONAL_DESTINATIONS
              Additional destination for parallel uploads, in addition to
              last arg. May be repeated.

       --delete-after-fetch
              Delete remote objects after fetching to local file (only for
              [get] and [sync] commands).

       -p, --preserve
              Preserve filesystem attributes (mode, ownership, timestamps).
              Default for [sync] command.

       --no-preserve
              Don't store FS attributes

       --exclude=GLOB
              Filenames and paths matching GLOB will be excluded from sync

       --exclude-from=FILE
              Read --exclude GLOBs from FILE

       --rexclude=REGEXP
              Filenames and paths matching REGEXP (regular expression) will
              be excluded from sync

       --rexclude-from=FILE
              Read --rexclude REGEXPs from FILE

       --include=GLOB
              Filenames and paths matching GLOB will be included even if
              previously excluded by one of --(r)exclude(-from) patterns

       --include-from=FILE
              Read --include GLOBs from FILE

       --rinclude=REGEXP
              Same as --include but uses REGEXP (regular expression) instead
              of GLOB

       --rinclude-from=FILE
              Read --rinclude REGEXPs from FILE

       --files-from=FILE
              Read list of source-file names from FILE. Use - to read from
              stdin.

       --region=REGION, --bucket-location=REGION
              Region to create bucket in. As of now the regions are:
              us-east-1, us-west-1, us-west-2, eu-west-1, eu-central-1,
              ap-northeast-1, ap-southeast-1, ap-southeast-2, sa-east-1

       --host=HOSTNAME
              HOSTNAME:PORT for S3 endpoint (default: s3.amazonaws.com,
              alternatives such as s3-eu-west-1.amazonaws.com). You should
              also set --host-bucket.

       --host-bucket=HOST_BUCKET
              DNS-style bucket+hostname:port template for accessing a bucket
              (default: %(bucket)s.s3.amazonaws.com)
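
              For example, a non-AWS, S3-compatible service might be reached
              with something like the following (hostnames are purely
              illustrative; some older services additionally need
              --signature-v2, see below, so check your provider's
              documentation):

                     s3cmd --host=storage.example.com \
                            --host-bucket='%(bucket)s.storage.example.com' \
                            ls s3://example-bucket
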
       --reduced-redundancy, --rr
              Store object with 'Reduced redundancy'. Lower per-GB price.
              [put, cp, mv]

       --no-reduced-redundancy, --no-rr
              Store object without 'Reduced redundancy'. Higher per-GB
              price. [put, cp, mv]

       --storage-class=CLASS
              Store object with specified CLASS (STANDARD, STANDARD_IA, or
              REDUCED_REDUNDANCY). Lower per-GB price. [put, cp, mv]
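
              For example, an infrequently accessed backup might be uploaded
              as (bucket and file names are placeholders):

                     s3cmd put --storage-class=STANDARD_IA backup.tar.gz \
                            s3://example-bucket/backups/
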
       --access-logging-target-prefix=LOG_TARGET_PREFIX
              Target prefix for access logs (S3 URI) (for [cfmodify] and
              [accesslog] commands)

       --no-access-logging
              Disable access logging (for [cfmodify] and [accesslog]
              commands)

       --default-mime-type=DEFAULT_MIME_TYPE
              Default MIME-type for stored objects. Application default is
              binary/octet-stream.

       -M, --guess-mime-type
              Guess MIME-type of files by their extension or mime magic.
              Fall back to default MIME-type as specified by
              --default-mime-type option

       --no-guess-mime-type
              Don't guess MIME-type and use the default type instead.

       --no-mime-magic
              Don't use mime magic when guessing MIME-type.

       -m MIME/TYPE, --mime-type=MIME/TYPE
              Force MIME-type. Override both --default-mime-type and
              --guess-mime-type.

       --add-header=NAME:VALUE
              Add a given HTTP header to the upload request. Can be used
              multiple times. For instance set 'Expires' or 'Cache-Control'
              headers (or both) using this option.
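
              For example, a long cache lifetime could be set on an uploaded
              asset like this (header values and names are illustrative):

                     s3cmd put --add-header="Cache-Control: max-age=86400" \
                            logo.png s3://example-bucket/static/
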
       --remove-header=NAME
              Remove a given HTTP header. Can be used multiple times. For
              instance, remove 'Expires' or 'Cache-Control' headers (or
              both) using this option. [modify]

       --server-side-encryption
              Specifies that server-side encryption will be used when
              putting objects. [put, sync, cp, modify]

       --server-side-encryption-kms-id=KMS_KEY
              Specifies the key id used for server-side encryption with AWS
              KMS-Managed Keys (SSE-KMS) when putting objects. [put, sync,
              cp, modify]

       --encoding=ENCODING
              Override autodetected terminal and filesystem encoding
              (character set). Autodetected: UTF-8

       --add-encoding-exts=EXTENSIONs
              Add encoding to these comma delimited extensions, e.g.
              (css,js,html), when uploading to S3

       --verbatim
              Use the S3 name as given on the command line. No
              pre-processing, encoding, etc. Use with caution!

       --disable-multipart
              Disable multipart upload on files bigger than
              --multipart-chunk-size-mb

       --multipart-chunk-size-mb=SIZE
              Size of each chunk of a multipart upload. Files bigger than
              SIZE are automatically uploaded as multithreaded-multipart,
              smaller files are uploaded using the traditional method. SIZE
              is in Mega-Bytes, default chunk size is 15MB, minimum allowed
              chunk size is 5MB, maximum is 5GB.

       --list-md5
              Include MD5 sums in bucket listings (only for 'ls' command).

       -H, --human-readable-sizes
              Print sizes in human readable form (e.g. 1kB instead of 1234).

       --ws-index=WEBSITE_INDEX
              Name of index-document (only for [ws-create] command)

       --ws-error=WEBSITE_ERROR
              Name of error-document (only for [ws-create] command)

       --expiry-date=EXPIRY_DATE
              Indicates when the expiration rule takes effect. (only for
              [expire] command)

       --expiry-days=EXPIRY_DAYS
              Indicates the number of days after object creation the
              expiration rule takes effect. (only for [expire] command)

       --expiry-prefix=EXPIRY_PREFIX
              Identifies one or more objects with the prefix to which the
              expiration rule applies. (only for [expire] command)

       --progress
              Display progress meter (default on TTY).

       --no-progress
              Don't display progress meter (default on non-TTY).

       --stats
              Give some file-transfer stats.

       --enable
              Enable given CloudFront distribution (only for [cfmodify]
              command)

       --disable
              Disable given CloudFront distribution (only for [cfmodify]
              command)

       --cf-invalidate
              Invalidate the uploaded files in CloudFront. Also see
              [cfinval] command.

       --cf-invalidate-default-index
              When using Custom Origin and S3 static website, invalidate the
              default index file.

       --cf-no-invalidate-default-index-root
              When using Custom Origin and S3 static website, don't
              invalidate the path to the default index file.

       --cf-add-cname=CNAME
              Add given CNAME to a CloudFront distribution (only for
              [cfcreate] and [cfmodify] commands)

       --cf-remove-cname=CNAME
              Remove given CNAME from a CloudFront distribution (only for
              [cfmodify] command)

       --cf-comment=COMMENT
              Set COMMENT for a given CloudFront distribution (only for
              [cfcreate] and [cfmodify] commands)

       --cf-default-root-object=DEFAULT_ROOT_OBJECT
              Set the default root object to return when no object is
              specified in the URL. Use a relative path, e.g.
              default/index.html instead of /default/index.html or
              s3://bucket/default/index.html (only for [cfcreate] and
              [cfmodify] commands)

       -v, --verbose
              Enable verbose output.

       -d, --debug
              Enable debug output.

       --version
              Show s3cmd version (2.0.2) and exit.

       -F, --follow-symlinks
              Follow symbolic links as if they are regular files

       --cache-file=FILE
              Cache FILE containing local source MD5 values

       -q, --quiet
              Silence output on stdout

       --ca-certs=CA_CERTS_FILE
              Path to SSL CA certificate FILE (instead of system default)

       --check-certificate
              Check SSL certificate validity

       --no-check-certificate
              Do not check SSL certificate validity

       --check-hostname
              Check SSL certificate hostname validity

       --no-check-hostname
              Do not check SSL certificate hostname validity

       --signature-v2
              Use AWS Signature version 2 instead of newer signature
              methods. Helpful for S3-like systems that don't have AWS
              Signature v4 yet.

       --limit-rate=LIMITRATE
              Limit the upload or download speed to amount bytes per second.
              Amount may be expressed in bytes, kilobytes with the k suffix,
              or megabytes with the m suffix

       --requester-pays
              Set the REQUESTER PAYS flag for operations

       -l, --long-listing
              Produce long listing [ls]

       --stop-on-error
              Stop if an error occurs during transfer

       --content-disposition=CONTENT_DISPOSITION
              Provide a Content-Disposition for signed URLs, e.g., "inline;
              filename=myvideo.mp4"

       --content-type=CONTENT_TYPE
              Provide a Content-Type for signed URLs, e.g., "video/mp4"

EXAMPLES
       One of the most powerful commands of s3cmd is s3cmd sync, used for
       synchronising complete directory trees to or from remote S3 storage.
       To some extent s3cmd put and s3cmd get share a similar behaviour with
       sync.

       Basic usage common in backup scenarios is as simple as:

              s3cmd sync /local/path/ s3://test-bucket/backup/

       This command will find all files under the /local/path directory and
       copy them to corresponding paths under s3://test-bucket/backup on the
       remote side. For example:

              /local/path/file1.ext        ->  s3://bucket/backup/file1.ext
              /local/path/dir123/file2.bin ->  s3://bucket/backup/dir123/file2.bin

       However, if the local path doesn't end with a slash, the last
       directory's name is used on the remote side as well. Compare these
       with the previous example:

              s3cmd sync /local/path s3://test-bucket/backup/

       will sync:

              /local/path/file1.ext        ->  s3://bucket/backup/path/file1.ext
              /local/path/dir123/file2.bin ->  s3://bucket/backup/path/dir123/file2.bin

       To retrieve the files back from S3 use the inverted syntax:

              s3cmd sync s3://test-bucket/backup/ ~/restore/

       that will download files:

              s3://bucket/backup/file1.ext        ->  ~/restore/file1.ext
              s3://bucket/backup/dir123/file2.bin ->  ~/restore/dir123/file2.bin

       Without the trailing slash on the source the behaviour is similar to
       what has been demonstrated with upload:

              s3cmd sync s3://test-bucket/backup ~/restore/

       will download the files as:

              s3://bucket/backup/file1.ext        ->  ~/restore/backup/file1.ext
              s3://bucket/backup/dir123/file2.bin ->  ~/restore/backup/dir123/file2.bin

       All source file names, that is the parts relative to the source path
       given on the command line (such as file1.ext or path/file1.ext in the
       examples above), are matched against exclude rules, and those that
       match are then re-checked against include rules to see whether they
       should be excluded or kept in the source list.

       For the purpose of --exclude and --include matching only these
       relative file names are used. For instance only path/file1.ext is
       tested against the patterns, not /local/path/file1.ext

       Both --exclude and --include work with shell-style wildcards (a.k.a.
       GLOB). For greater flexibility s3cmd provides regular-expression
       versions of the two exclude options named --rexclude and --rinclude.
       The options with a ...-from suffix (e.g. --rinclude-from) expect a
       filename as an argument. Each line of such a file is treated as one
       pattern.

       There is only one set of patterns built from all --(r)exclude(-from)
       options, and similarly for the include variants. Any file excluded
       with e.g. --exclude can be put back with a pattern found in an
       --rinclude-from list.

       Run s3cmd with --dry-run to verify that your rules work as expected.
       Use it together with --debug to get detailed information about
       matching file names against exclude and include rules.
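
       For instance, a rule set could be checked before the real run like
       this (the patterns and paths are illustrative):

              s3cmd sync --dry-run --exclude '*.tmp' --include 'keep.tmp' \
                     /local/path/ s3://test-bucket/backup/
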
       For example, to exclude all files with the ".jpg" extension except
       those beginning with a number, use:

              --exclude '*.jpg' --rinclude '[0-9].*.jpg'

       To exclude all files except those with the ".jpg" extension, use:

              --exclude '*' --include '*.jpg'

       To exclude the local directory 'somedir', be sure to use a trailing
       forward slash, as such:

              --exclude 'somedir/'

SEE ALSO
       For the most up to date list of options run: s3cmd --help
       For more info about usage, examples and other related info visit the
       project homepage at: http://s3tools.org

AUTHOR
       Written by Michal Ludvig and contributors

CONTACT, SUPPORT
       Preferred way to get support is our mailing list:
       s3tools-general@lists.sourceforge.net
       or visit the project homepage:
       http://s3tools.org

REPORTING BUGS
       Report bugs to s3tools-bugs@lists.sourceforge.net

COPYRIGHT
       Copyright © 2007-2015 TGRMN Software - http://www.tgrmn.com - and
       contributors

LICENSE
       This program is free software; you can redistribute it and/or modify
       it under the terms of the GNU General Public License as published by
       the Free Software Foundation; either version 2 of the License, or (at
       your option) any later version. This program is distributed in the
       hope that it will be useful, but WITHOUT ANY WARRANTY; without even
       the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR
       PURPOSE. See the GNU General Public License for more details.

                                                                      s3cmd(1)