s3cmd(1)                    General Commands Manual                   s3cmd(1)

NAME

       s3cmd - tool for managing Amazon S3 storage space and Amazon CloudFront
       content delivery network

SYNOPSIS

       s3cmd [OPTIONS] COMMAND [PARAMETERS]

DESCRIPTION

       s3cmd is a command line client for copying files to/from Amazon S3
       (Simple Storage Service) and performing other related tasks, for
       instance creating and removing buckets, listing objects, etc.

COMMANDS

       s3cmd can do several actions specified by the following commands.  A
       combined usage example follows the list.

       s3cmd mb s3://BUCKET
              Make bucket

       s3cmd rb s3://BUCKET
              Remove bucket

       s3cmd ls [s3://BUCKET[/PREFIX]]
              List objects or buckets

       s3cmd la
              List all objects in all buckets

       s3cmd put FILE [FILE...] s3://BUCKET[/PREFIX]
              Put file into bucket

       s3cmd get s3://BUCKET/OBJECT LOCAL_FILE
              Get file from bucket

       s3cmd del s3://BUCKET/OBJECT
              Delete file from bucket

       s3cmd rm s3://BUCKET/OBJECT
              Delete file from bucket (alias for del)

       s3cmd restore s3://BUCKET/OBJECT
              Restore file from Glacier storage

       s3cmd sync LOCAL_DIR s3://BUCKET[/PREFIX] or s3://BUCKET[/PREFIX]
       LOCAL_DIR or s3://BUCKET[/PREFIX] s3://BUCKET[/PREFIX]
              Synchronize a directory tree to S3 (checks file freshness using
              size and md5 checksum, unless overridden by options, see below)

       s3cmd du [s3://BUCKET[/PREFIX]]
              Disk usage by buckets

       s3cmd info s3://BUCKET[/OBJECT]
              Get various information about Buckets or Files

       s3cmd cp s3://BUCKET1/OBJECT1 s3://BUCKET2[/OBJECT2]
              Copy object

       s3cmd modify s3://BUCKET1/OBJECT
              Modify object metadata

       s3cmd mv s3://BUCKET1/OBJECT1 s3://BUCKET2[/OBJECT2]
              Move object

       s3cmd setacl s3://BUCKET[/OBJECT]
              Modify Access control list for Bucket or Files

       s3cmd setpolicy FILE s3://BUCKET
              Modify Bucket Policy

       s3cmd delpolicy s3://BUCKET
              Delete Bucket Policy

       s3cmd setcors FILE s3://BUCKET
              Modify Bucket CORS

       s3cmd delcors s3://BUCKET
              Delete Bucket CORS

       s3cmd payer s3://BUCKET
              Modify Bucket Requester Pays policy

       s3cmd multipart s3://BUCKET [Id]
              Show multipart uploads

       s3cmd abortmp s3://BUCKET/OBJECT Id
              Abort a multipart upload

       s3cmd listmp s3://BUCKET/OBJECT Id
              List parts of a multipart upload

       s3cmd accesslog s3://BUCKET
              Enable/disable bucket access logging

       s3cmd sign STRING-TO-SIGN
              Sign arbitrary string using the secret key

       s3cmd signurl s3://BUCKET/OBJECT <expiry_epoch|+expiry_offset>
              Sign an S3 URL to provide limited public access with expiry

       s3cmd fixbucket s3://BUCKET[/PREFIX]
              Fix invalid file names in a bucket

       s3cmd expire s3://BUCKET
              Set or delete expiration rule for the bucket

       s3cmd setlifecycle FILE s3://BUCKET
              Upload a lifecycle policy for the bucket

       s3cmd getlifecycle s3://BUCKET
              Get a lifecycle policy for the bucket

       s3cmd dellifecycle s3://BUCKET
              Remove a lifecycle policy for the bucket

       s3cmd setnotification FILE s3://BUCKET
              Upload a notification policy for the bucket

       s3cmd getnotification s3://BUCKET
              Get a notification policy for the bucket

       s3cmd delnotification s3://BUCKET
              Remove a notification policy for the bucket

       Commands for static WebSites configuration

       s3cmd ws-create s3://BUCKET
              Create Website from bucket

       s3cmd ws-delete s3://BUCKET
              Delete Website

       s3cmd ws-info s3://BUCKET
              Info about Website

       Commands for CloudFront management

       s3cmd cflist
              List CloudFront distribution points

       s3cmd cfinfo [cf://DIST_ID]
              Display CloudFront distribution point parameters

       s3cmd cfcreate s3://BUCKET
              Create CloudFront distribution point

       s3cmd cfdelete cf://DIST_ID
              Delete CloudFront distribution point

       s3cmd cfmodify cf://DIST_ID
              Change CloudFront distribution point parameters

       s3cmd cfinvalinfo cf://DIST_ID[/INVAL_ID]
              Display CloudFront invalidation request(s) status

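       A combined example: the session below creates a bucket, uploads a
       file, lists the bucket, downloads the file again and finally cleans
       up.  The bucket and file names are purely illustrative.

            s3cmd mb s3://example-bucket
            s3cmd put report.pdf s3://example-bucket/
            s3cmd ls s3://example-bucket/
            s3cmd get s3://example-bucket/report.pdf report-copy.pdf
            s3cmd del s3://example-bucket/report.pdf
            s3cmd rb s3://example-bucket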

OPTIONS

       Some of the below specified options can have their default values set
       in the s3cmd config file (by default $HOME/.s3cfg). As it is a simple
       text file, feel free to open it with your favorite text editor and
       make any changes you like.

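       The file uses a simple INI-style layout.  A minimal sketch with
       placeholder credentials is shown below; a real file written by
       --configure contains many more options:

            [default]
            access_key = AKIAIOSFODNN7EXAMPLE
            secret_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
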
       -h, --help
              show this help message and exit

       --configure
              Invoke interactive (re)configuration tool. Optionally use as
              '--configure s3://some-bucket' to test access to a specific
              bucket instead of attempting to list them all.

       -c FILE, --config=FILE
              Config file name. Defaults to $HOME/.s3cfg

       --dump-config
              Dump current configuration after parsing config files and
              command line options and exit.

       --access_key=ACCESS_KEY
              AWS Access Key

       --secret_key=SECRET_KEY
              AWS Secret Key

       --access_token=ACCESS_TOKEN
              AWS Access Token

       -n, --dry-run
              Only show what should be uploaded or downloaded but don't
              actually do it. May still perform S3 requests to get bucket
              listings and other information though (only for file transfer
              commands)

       -s, --ssl
              Use HTTPS connection when communicating with S3.  (default)

       --no-ssl
              Don't use HTTPS.

       -e, --encrypt
              Encrypt files before uploading to S3.

       --no-encrypt
              Don't encrypt files.

       -f, --force
              Force overwrite and other dangerous operations.

       --continue
              Continue getting a partially downloaded file (only for [get]
              command).

       --continue-put
              Continue uploading partially uploaded files or multipart upload
              parts.  Restarts parts/files that don't have matching size and
              md5.  Skips files/parts that do.  Note: md5sum checks are not
              always sufficient to check (part) file equality.  Enable this at
              your own risk.

       --upload-id=UPLOAD_ID
              UploadId for Multipart Upload, in case you want to continue an
              existing upload (equivalent to --continue-put) and there are
              multiple partial uploads.  Use s3cmd multipart [URI] to see what
              UploadIds are associated with the given URI.

       --skip-existing
              Skip over files that exist at the destination (only for [get]
              and [sync] commands).

       -r, --recursive
              Recursive upload, download or removal.

       --check-md5
              Check MD5 sums when comparing files for [sync].  (default)

       --no-check-md5
              Do not check MD5 sums when comparing files for [sync].  Only
              size will be compared. May significantly speed up transfer but
              may also miss some changed files.

       -P, --acl-public
              Store objects with ACL allowing read for anyone.

       --acl-private
              Store objects with default ACL allowing access for you only.

       --acl-grant=PERMISSION:EMAIL or USER_CANONICAL_ID
              Grant stated permission to a given amazon user.  Permission is
              one of: read, write, read_acp, write_acp, full_control, all

       --acl-revoke=PERMISSION:USER_CANONICAL_ID
              Revoke stated permission for a given amazon user.  Permission is
              one of: read, write, read_acp, write_acp, full_control, all

       -D NUM, --restore-days=NUM
              Number of days to keep restored file available (only for
              'restore' command). Default is 1 day.

       --restore-priority=RESTORE_PRIORITY
              Priority for restoring files from S3 Glacier (only for
              'restore' command). Choices available: bulk, standard, expedited

       --delete-removed
              Delete destination objects with no corresponding source file
              [sync]

       --no-delete-removed
              Don't delete destination objects [sync]

       --delete-after
              Perform deletes AFTER new uploads when delete-removed is enabled
              [sync]

       --delay-updates
              *OBSOLETE* Put all updated files into place at end [sync]

       --max-delete=NUM
              Do not delete more than NUM files. [del] and [sync]

       --limit=NUM
              Limit number of objects returned in the response body (only for
              [ls] and [la] commands)

       --add-destination=ADDITIONAL_DESTINATIONS
              Additional destination for parallel uploads, in addition to last
              arg.  May be repeated.

       --delete-after-fetch
              Delete remote objects after fetching to local file (only for
              [get] and [sync] commands).

       -p, --preserve
              Preserve filesystem attributes (mode, ownership, timestamps).
              Default for [sync] command.

       --no-preserve
              Don't store FS attributes

       --exclude=GLOB
              Filenames and paths matching GLOB will be excluded from sync

       --exclude-from=FILE
              Read --exclude GLOBs from FILE

       --rexclude=REGEXP
              Filenames and paths matching REGEXP (regular expression) will be
              excluded from sync

       --rexclude-from=FILE
              Read --rexclude REGEXPs from FILE

       --include=GLOB
              Filenames and paths matching GLOB will be included even if
              previously excluded by one of --(r)exclude(-from) patterns

       --include-from=FILE
              Read --include GLOBs from FILE

       --rinclude=REGEXP
              Same as --include but uses REGEXP (regular expression) instead
              of GLOB

       --rinclude-from=FILE
              Read --rinclude REGEXPs from FILE

       --files-from=FILE
              Read list of source-file names from FILE. Use - to read from
              stdin.

       --region=REGION, --bucket-location=REGION
              Region to create bucket in.  As of now the regions are:
              us-east-1, us-west-1, us-west-2, eu-west-1, eu-central-1,
              ap-northeast-1, ap-southeast-1, ap-southeast-2, sa-east-1

       --host=HOSTNAME
              HOSTNAME:PORT for S3 endpoint (default: s3.amazonaws.com,
              alternatives such as s3-eu-west-1.amazonaws.com). You should
              also set --host-bucket.

       --host-bucket=HOST_BUCKET
              DNS-style bucket+hostname:port template for accessing a bucket
              (default: %(bucket)s.s3.amazonaws.com)

       --reduced-redundancy, --rr
              Store object with 'Reduced redundancy'. Lower per-GB price.
              [put, cp, mv]

       --no-reduced-redundancy, --no-rr
              Store object without 'Reduced redundancy'. Higher per-GB price.
              [put, cp, mv]

       --storage-class=CLASS
              Store object with specified CLASS (STANDARD, STANDARD_IA,
              ONEZONE_IA, INTELLIGENT_TIERING, GLACIER or DEEP_ARCHIVE).
              [put, cp, mv]

       --access-logging-target-prefix=LOG_TARGET_PREFIX
              Target prefix for access logs (S3 URI) (for [cfmodify] and
              [accesslog] commands)

       --no-access-logging
              Disable access logging (for [cfmodify] and [accesslog] commands)

       --default-mime-type=DEFAULT_MIME_TYPE
              Default MIME-type for stored objects. Application default is
              binary/octet-stream.

       -M, --guess-mime-type
              Guess MIME-type of files by their extension or mime magic. Fall
              back to default MIME-Type as specified by --default-mime-type
              option

       --no-guess-mime-type
              Don't guess MIME-type and use the default type instead.

       --no-mime-magic
              Don't use mime magic when guessing MIME-type.

       -m MIME/TYPE, --mime-type=MIME/TYPE
              Force MIME-type.  Override both --default-mime-type and
              --guess-mime-type.

       --add-header=NAME:VALUE
              Add a given HTTP header to the upload request. Can be used
              multiple times. For instance set 'Expires' or 'Cache-Control'
              headers (or both) using this option (see the example at the end
              of this section).

       --remove-header=NAME
              Remove a given HTTP header.  Can be used multiple times.  For
              instance, remove 'Expires' or 'Cache-Control' headers (or both)
              using this option. [modify]

       --server-side-encryption
              Specifies that server-side encryption will be used when putting
              objects. [put, sync, cp, modify]

       --server-side-encryption-kms-id=KMS_KEY
              Specifies the key id used for server-side encryption with AWS
              KMS-Managed Keys (SSE-KMS) when putting objects. [put, sync, cp,
              modify]

       --encoding=ENCODING
              Override autodetected terminal and filesystem encoding
              (character set). Autodetected: UTF-8

       --add-encoding-exts=EXTENSIONs
              Add encoding to these comma delimited extensions, e.g.
              (css,js,html), when uploading to S3

       --verbatim
              Use the S3 name as given on the command line. No pre-processing,
              encoding, etc. Use with caution!

       --disable-multipart
              Disable multipart upload on files bigger than
              --multipart-chunk-size-mb

       --multipart-chunk-size-mb=SIZE
              Size of each chunk of a multipart upload. Files bigger than SIZE
              are automatically uploaded as multithreaded-multipart, smaller
              files are uploaded using the traditional method. SIZE is in
              Mega-Bytes, default chunk size is 15MB, minimum allowed chunk
              size is 5MB, maximum is 5GB.

       --list-md5
              Include MD5 sums in bucket listings (only for 'ls' command).

       --list-allow-unordered
              Not an AWS standard. Allow the listing results to be returned in
              unsorted order. This may be faster when listing very large
              buckets.

       -H, --human-readable-sizes
              Print sizes in human readable form (eg 1kB instead of 1234).

       --ws-index=WEBSITE_INDEX
              Name of index-document (only for [ws-create] command)

       --ws-error=WEBSITE_ERROR
              Name of error-document (only for [ws-create] command)

       --expiry-date=EXPIRY_DATE
              Indicates when the expiration rule takes effect. (only for
              [expire] command)

       --expiry-days=EXPIRY_DAYS
              Indicates the number of days after object creation the
              expiration rule takes effect. (only for [expire] command)

       --expiry-prefix=EXPIRY_PREFIX
              Identifies one or more objects with the prefix to which the
              expiration rule applies. (only for [expire] command)

       --progress
              Display progress meter (default on TTY).

       --no-progress
              Don't display progress meter (default on non-TTY).

       --stats
              Give some file-transfer stats.

       --enable
              Enable given CloudFront distribution (only for [cfmodify]
              command)

       --disable
              Disable given CloudFront distribution (only for [cfmodify]
              command)

       --cf-invalidate
              Invalidate the uploaded files in CloudFront. Also see [cfinval]
              command.

       --cf-invalidate-default-index
              When using Custom Origin and S3 static website, invalidate the
              default index file.

       --cf-no-invalidate-default-index-root
              When using Custom Origin and S3 static website, don't invalidate
              the path to the default index file.

       --cf-add-cname=CNAME
              Add given CNAME to a CloudFront distribution (only for
              [cfcreate] and [cfmodify] commands)

       --cf-remove-cname=CNAME
              Remove given CNAME from a CloudFront distribution (only for
              [cfmodify] command)

       --cf-comment=COMMENT
              Set COMMENT for a given CloudFront distribution (only for
              [cfcreate] and [cfmodify] commands)

       --cf-default-root-object=DEFAULT_ROOT_OBJECT
              Set the default root object to return when no object is
              specified in the URL. Use a relative path, i.e.
              default/index.html instead of /default/index.html or
              s3://bucket/default/index.html (only for [cfcreate] and
              [cfmodify] commands)

       -v, --verbose
              Enable verbose output.

       -d, --debug
              Enable debug output.

       --version
              Show s3cmd version (2.3.0) and exit.

       -F, --follow-symlinks
              Follow symbolic links as if they are regular files

       --cache-file=FILE
              Cache FILE containing local source MD5 values

       -q, --quiet
              Silence output on stdout

       --ca-certs=CA_CERTS_FILE
              Path to SSL CA certificate FILE (instead of system default)

       --ssl-cert=SSL_CLIENT_CERT_FILE
              Path to the client's own SSL certificate CRT_FILE

       --ssl-key=SSL_CLIENT_KEY_FILE
              Path to the client's own SSL certificate private key KEY_FILE

       --check-certificate
              Check SSL certificate validity

       --no-check-certificate
              Do not check SSL certificate validity

       --check-hostname
              Check SSL certificate hostname validity

       --no-check-hostname
              Do not check SSL certificate hostname validity

       --signature-v2
              Use AWS Signature version 2 instead of newer signature methods.
              Helpful for S3-like systems that don't have AWS Signature v4
              yet.

       --limit-rate=LIMITRATE
              Limit the upload or download speed to amount bytes per second.
              Amount may be expressed in bytes, kilobytes with the k suffix,
              or megabytes with the m suffix

       --no-connection-pooling
              Disable connection re-use

       --requester-pays
              Set the REQUESTER PAYS flag for operations

       -l, --long-listing
              Produce long listing [ls]

       --stop-on-error
              Stop if an error occurs during transfer

       --content-disposition=CONTENT_DISPOSITION
              Provide a Content-Disposition for signed URLs, e.g., "inline;
              filename=myvideo.mp4"

       --content-type=CONTENT_TYPE
              Provide a Content-Type for signed URLs, e.g., "video/mp4"

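       Several of the above options can be combined on a single command.  The
       sketch below (bucket, file and header value are illustrative) uploads a
       file that is publicly readable and carries a caching header:

            s3cmd put --acl-public --add-header='Cache-Control: max-age=86400' index.html s3://example-bucket/
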

EXAMPLES

       One of the most powerful commands of s3cmd is s3cmd sync used for
       synchronising complete directory trees to or from remote S3 storage.
       To some extent s3cmd put and s3cmd get share a similar behaviour with
       sync.

       Basic usage common in backup scenarios is as simple as:
            s3cmd sync /local/path/ s3://test-bucket/backup/

       This command will find all files under /local/path directory and copy
       them to corresponding paths under s3://test-bucket/backup on the remote
       side.  For example:
            /local/path/file1.ext         ->  s3://bucket/backup/file1.ext
            /local/path/dir123/file2.bin  ->  s3://bucket/backup/dir123/file2.bin

       However if the local path doesn't end with a slash the last directory's
       name is used on the remote side as well. Compare these with the
       previous example:
            s3cmd sync /local/path s3://test-bucket/backup/
       will sync:
            /local/path/file1.ext         ->  s3://bucket/backup/path/file1.ext
            /local/path/dir123/file2.bin  ->  s3://bucket/backup/path/dir123/file2.bin

       To retrieve the files back from S3 use inverted syntax:
            s3cmd sync s3://test-bucket/backup/ ~/restore/
       that will download files:
            s3://bucket/backup/file1.ext         ->  ~/restore/file1.ext
            s3://bucket/backup/dir123/file2.bin  ->  ~/restore/dir123/file2.bin

       Without the trailing slash on source the behaviour is similar to what
       has been demonstrated with upload:
            s3cmd sync s3://test-bucket/backup ~/restore/
       will download the files as:
            s3://bucket/backup/file1.ext         ->  ~/restore/backup/file1.ext
            s3://bucket/backup/dir123/file2.bin  ->  ~/restore/backup/dir123/file2.bin

       All source file names, that is the relative parts of the paths shown in
       the examples above, are matched against exclude rules, and those that
       match are then re-checked against include rules to see whether they
       should be excluded or kept in the source list.

       For the purpose of --exclude and --include matching only these relative
       file names are used.  For instance only path/file1.ext is tested
       against the patterns, not /local/path/file1.ext

       Both --exclude and --include work with shell-style wildcards (a.k.a.
       GLOB).  For greater flexibility s3cmd provides Regular-expression
       versions of the two exclude options named --rexclude and --rinclude.
       The options with ...-from suffix (eg --rinclude-from) expect a filename
       as an argument. Each line of such a file is treated as one pattern.

       There is only one set of patterns built from all --(r)exclude(-from)
       options and similarly for the include variant. Any file excluded with
       eg --exclude can be put back with a pattern found in --rinclude-from
       list.

       Run s3cmd with --dry-run to verify that your rules work as expected.
       Use it together with --debug to get detailed information about matching
       file names against exclude and include rules.

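       A typical verification run might look like this (the paths and pattern
       are illustrative):

            s3cmd sync --dry-run --exclude '*.tmp' /local/path/ s3://test-bucket/backup/
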
       For example to exclude all files with ".jpg" extension except those
       beginning with a number use:

            --exclude '*.jpg' --rinclude '[0-9].*.jpg'

       To exclude all files except those with ".jpg" extension, use:

            --exclude '*' --include '*.jpg'

       To exclude local directory 'somedir', be sure to use a trailing forward
       slash, as such:

            --exclude 'somedir/'

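       Putting it together, a simple backup job might look like the following
       sketch (the paths, bucket name and exclude file are illustrative); it
       mirrors a local tree, deletes remote objects whose local counterparts
       are gone, and skips anything matched by the patterns in the exclude
       file:

            s3cmd sync --delete-removed --exclude-from /etc/backup-excludes.txt /local/path/ s3://test-bucket/backup/
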

SEE ALSO

       For the most up to date list of options run: s3cmd --help
       For more info about usage, examples and other related info visit the
       project homepage at: http://s3tools.org

AUTHOR

       Written by Michal Ludvig and contributors

CONTACT, SUPPORT

       The preferred way to get support is our mailing list:
       s3tools-general@lists.sourceforge.net
       or visit the project homepage:
       http://s3tools.org

REPORTING BUGS

       Report bugs to s3tools-bugs@lists.sourceforge.net

COPYRIGHT

       Copyright © 2007-2015 TGRMN Software - http://www.tgrmn.com - and
       contributors

LICENSE

       This program is free software; you can redistribute it and/or modify it
       under the terms of the GNU General Public License as published by the
       Free Software Foundation; either version 2 of the License, or (at your
       option) any later version.  This program is distributed in the hope
       that it will be useful, but WITHOUT ANY WARRANTY; without even the
       implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR
       PURPOSE.  See the GNU General Public License for more details.

                                                                      s3cmd(1)