1S3(n) Amazon S3 Web Service Utilities S3(n)
2
3
4
5______________________________________________________________________________
6
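7 NAME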
8 S3 - Amazon S3 Web Service Interface
9
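10 SYNOPSIS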
11 package require Tcl 8.5
12
13 package require S3 ?1.0.3?
14
15 package require sha1 1.0
16
17 package require md5 2.0
18
19 package require base64 2.3
20
21 package require xsxp 1.0
22
23 S3::Configure ?-reset boolean? ?-retries integer? ?-accesskeyid
24 idstring? ?-secretaccesskey idstring? ?-service-access-point FQDN?
25 ?-use-tls boolean? ?-default-compare always|never|exists|miss‐
26 ing|newer|date|checksum|different? ?-default-separator string?
27 ?-default-acl private|public-read|public-read-write|authenticated-
28 read|keep|calc? ?-default-bucket bucketname?
29
30 S3::SuggestBucket ?name?
31
32 S3::REST dict
33
34 S3::ListAllMyBuckets ?-blocking boolean? ?-parse-xml xmlstring?
35 ?-result-type REST|xml|pxml|dict|names|owner?
36
37 S3::PutBucket ?-bucket bucketname? ?-blocking boolean? ?-acl {}|pri‐
38 vate|public-read|public-read-write|authenticated-read?
39
40 S3::DeleteBucket ?-bucket bucketname? ?-blocking boolean?
41
42 S3::GetBucket ?-bucket bucketname? ?-blocking boolean? ?-parse-xml xml‐
43 string? ?-max-count integer? ?-prefix prefixstring? ?-delimiter delim‐
44 iterstring? ?-result-type REST|xml|pxml|names|dict?
45
46 S3::Put ?-bucket bucketname? -resource resourcename ?-blocking boolean?
47 ?-file filename? ?-content contentstring? ?-acl private|public-
48 read|public-read-write|authenticated-read|calc|keep? ?-content-type
49 contenttypestring? ?-x-amz-meta-* metadatatext? ?-compare comparemode?
50
51 S3::Get ?-bucket bucketname? -resource resourcename ?-blocking boolean?
52 ?-compare comparemode? ?-file filename? ?-content contentvarname?
53 ?-timestamp aws|now? ?-headers headervarname?
54
55 S3::Head ?-bucket bucketname? -resource resourcename ?-blocking bool‐
56 ean? ?-dict dictvarname? ?-headers headersvarname? ?-status statusvar‐
57 name?
58
59 S3::GetAcl ?-blocking boolean? ?-bucket bucketname? -resource resource‐
60 name ?-result-type REST|xml|pxml?
61
62 S3::PutAcl ?-blocking boolean? ?-bucket bucketname? -resource resource‐
63 name ?-acl new-acl?
64
65 S3::Delete ?-bucket bucketname? -resource resourcename ?-blocking bool‐
66 ean? ?-status statusvar?
67
68 S3::Push ?-bucket bucketname? -directory directoryname ?-prefix pre‐
69 fixstring? ?-compare comparemode? ?-x-amz-meta-* metastring? ?-acl
70 aclcode? ?-delete boolean? ?-error throw|break|continue? ?-progress
71 scriptprefix?
72
73 S3::Pull ?-bucket bucketname? -directory directoryname ?-prefix pre‐
74 fixstring? ?-blocking boolean? ?-compare comparemode? ?-delete boolean?
75 ?-timestamp aws|now? ?-error throw|break|continue? ?-progress script‐
76 prefix?
77
78 S3::Toss ?-bucket bucketname? -prefix prefixstring ?-blocking boolean?
79 ?-error throw|break|continue? ?-progress scriptprefix?
80
81______________________________________________________________________________
82
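83 DESCRIPTION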
84 This package provides access to the Amazon Simple Storage Service (S3)
85 web service.
86
87 As a quick summary, Amazon Simple Storage Service provides a for-fee
88 web service allowing the storage of arbitrary data as "resources"
89 within "buckets" online. See http://www.amazonaws.com/ for details on
90 that system. Access to the service is via HTTP (SOAP or REST). Much
91 of this documentation will not make sense if you're not familiar with
92 the terms and functionality of the Amazon S3 service.
93
94 This package provides services for reading and writing the data items
95 via the REST interface. It also provides some higher-level operations.
96 Other packages in the same distribution provide for even more function‐
97 ality.
98
99 Copyright 2006 Darren New. All Rights Reserved. NO WARRANTIES OF ANY
100 TYPE ARE PROVIDED. COPYING OR USE INDEMNIFIES THE AUTHOR IN ALL WAYS.
101 This software is licensed under essentially the same terms as Tcl. See
102 LICENSE.txt for the terms.
103
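104 ERROR REPORTING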
105 The error reporting from this package makes use of $errorCode to pro‐
106 vide more details on what happened than simply throwing an error. Any
107 error caught by the S3 package (and we try to catch them all) will
108 return with an $errorCode being a list having at least three elements.
109 In all cases, the first element will be "S3". The second element will
110 take on one of six values, with that element defining the value of the
111 third and subsequent elements. S3::REST does not throw an error, but
112 rather returns a dictionary with the keys "error", "errorInfo", and
113 "errorCode" set. This allows for reliable background use. The possible
114 second elements are these:
115
116 usage The usage of the package is incorrect. For example, a command
117 has been invoked which requires the library to be configured
118 before the library has been configured, or an invalid combina‐
119 tion of options has been specified. The third element of $error‐
120 Code supplies the name of the parameter that was wrong. The
121 fourth usually provides the arguments that were actually sup‐
122 plied to the throwing proc, unless the usage error isn't con‐
123 fined to a single proc.
124
125 local Something happened on the local system which threw an error. For
126 example, a request to upload or download a file was made and the
127 file permissions denied that sort of access. The third element
128 of $errorCode is the original $errorCode.
129
130 socket Something happened with the socket. It closed prematurely, or
131 some other condition of failure-to-communicate-with-Amazon was
132 detected. The third element of $errorCode is the original
133 $errorCode, or sometimes the message from fcopy.
134
135 remote The Amazon web service returned an error code outside the 2xx
136 range in the HTTP header. In other words, everything went as
137 documented, except this particular case was documented not to
138 work. The third element is the dictionary returned from
139 ::S3::REST. Note that S3::REST itself never throws this error,
140 but just returns the dictionary. Most of the higher-level com‐
141 mands throw for convenience, unless an argument indicates they
142 should not. If something is documented as "not throwing an S3
143 remote error", it means a status return is set rather than
144 throwing an error if Amazon returns a non-2XX HTTP result code.
145
146 notyet The user obeyed the documentation, but the author has not yet
147 gotten around to implementing this feature. (Right now, only TLS
148 support and sophisticated permissions fall into this category,
149 as well as the S3::Acl command.)
150
151 xml The service has returned invalid XML, or XML whose schema is
152 unexpected. For the high-level commands that accept service XML
153 as input for parsing, this may also be thrown.
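
       As an illustration, a caller might dispatch on the second element of
       $errorCode after a failed call; the bucket and resource names below
       are placeholders:

           if {[catch {
               S3::Get -bucket my-bucket -resource greeting.txt -content data
           } msg]} {
               if {[lindex $::errorCode 0] eq "S3"} {
                   switch -- [lindex $::errorCode 1] {
                       remote {
                           # Third element is the dictionary from S3::REST.
                           set rest [lindex $::errorCode 2]
                           puts "Amazon returned [dict get $rest httpstatus]"
                       }
                       socket  { puts "network problem: $msg" }
                       default { puts "S3 [lindex $::errorCode 1] error: $msg" }
                   }
               } else {
                   puts "non-S3 error: $msg"
               }
           }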
154
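155 COMMANDS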
156 This package provides several separate levels of complexity.
157
158 · The lowest level simply takes arguments to be sent to the ser‐
159 vice, sends them, retrieves the result, and provides it to the
160 caller. Note: This layer allows both synchronous and event-
161 driven processing. It depends on the MD5 and SHA1 and base64
162 packages from Tcllib (available at http://core.tcl.tk/tcllib/).
163 Note that S3::Configure is required for S3::REST to work due to
164 the authentication portion, so we put that in the "lowest
165 level."
166
167 · The next layer parses the results of calls, allowing for func‐
168 tionality such as uploading only changed files, synchronizing
169 directories, and so on. This layer depends on the TclXML pack‐
170 age as well as the included xsxp package. These packages are
171 package required when these more-sophisticated routines are
172 called, so nothing breaks if they are not correctly installed.
173
174 · Also included is a separate program that uses the library. It
175 provides code to parse $argv0 and $argv from the command line,
176 allowing invocation as a tclkit, etc. (Not yet implemented.)
177
178 · Another separate program provides a GUI interface allowing drag-
179 and-drop and other such functionality. (Not yet implemented.)
180
181 · Also built on this package is the OddJob program. It is a sepa‐
182 rate program designed to allow distribution of computational
183 work units over Amazon's Elastic Compute Cloud web service.
184
185 The goal is to have at least the bottom-most layers implemented in pure
186 Tcl using only that which comes from widely-available sources, such as
187 Tcllib.
188
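189 LOW LEVEL COMMANDS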
190 These commands do not require any packages not listed above. They talk
191 directly to the service, or they are utility or configuration routines.
192 Note that the "xsxp" package was written to support this package, so it
193 should be available wherever you got this package.
194
195 S3::Configure ?-reset boolean? ?-retries integer? ?-accesskeyid
196 idstring? ?-secretaccesskey idstring? ?-service-access-point FQDN?
197 ?-use-tls boolean? ?-default-compare always|never|exists|miss‐
198 ing|newer|date|checksum|different? ?-default-separator string?
199 ?-default-acl private|public-read|public-read-write|authenticated-
200 read|keep|calc? ?-default-bucket bucketname?
201 There is one command for configuration, and that is S3::Config‐
202 ure. If called with no arguments, it returns a dictionary of
203 key/value pairs listing all current settings. If called with
204 one argument, it returns the value of that single argument. If
205 called with two or more arguments, it must be called with pairs
206 of arguments, and it applies the changes in order. There is
207 only one set of configuration information per interpreter.
208
209 The following options are accepted:
210
211 -reset boolean
212 By default, false. If true, any previous changes and any
213 changes on the same call before the reset option will be
214 returned to default values.
215
216 -retries integer
217 Default value is 3. If Amazon returns a 500 error, a
218 retry after an exponential backoff delay will be tried
219 this many times before finally throwing the 500 error.
220 This applies to each call to S3::REST from the higher-
221 level commands, but not to S3::REST itself. That is,
222 S3::REST will always return httpstatus 500 if that's what
223 it receives. Functions like S3::Put will retry the PUT
224 call, and will also retry the GET and HEAD calls used to
225 do content comparison. Changing this to 0 will prevent
226 retries and their associated delays. In addition, socket
227 errors (i.e., errors whose errorCode starts with "S3
228 socket") will be similarly retried after backoffs.
229
230 -accesskeyid idstring
231
232 -secretaccesskey idstring
233 Each defaults to an empty string. These must be set
234 before any calls are made. This is your S3 ID. Once you
235 sign up for an account, go to http://www.amazonaws.com/,
236 sign in, go to the "Your Web Services Account" button,
237 pick "AWS Access Identifiers", and your access key ID and
238 secret access keys will be available. All S3::REST calls
239 are authenticated. Blame Amazon for the poor choice of
240 names.
241
242 -service-access-point FQDN
243 Defaults to "s3.amazonaws.com". This is the fully-quali‐
244 fied domain name of the server to contact for S3::REST
245 calls. You should probably never need to touch this,
246 unless someone else implements a compatible service, or
247 you wish to test something by pointing the library at
248 your own service.
249
250 -slop-seconds integer
251 When comparing dates between Amazon and the local
252 machine, two dates within this many seconds of each other
253 are considered the same. Useful for clock drift correc‐
254 tion, processing overhead time, and so on.
255
256 -use-tls boolean
257 Defaults to false. This is not yet implemented. If true,
258 S3::REST will negotiate a TLS connection to Amazon. If
259 false, unencrypted connections are used.
260
261 -bucket-prefix string
262 Defaults to "TclS3". This string is used by
263 S3::SuggestBucket if that command is passed an empty string as
264 an argument. It is used to distinguish different applica‐
265 tions using the Amazon service. Your application should
266 always set this to keep from interfering with the buckets
267 of other users of Amazon S3 or with other buckets of the
268 same user.
269
270 -default-compare always|never|exists|missing|newer|date|check‐
271 sum|different
272 Defaults to "always." If no -compare is specified on
273 S3::Put, S3::Get, or S3::Delete, this comparison is used.
274 See those commands for a description of the meaning.
275
276 -default-separator string
277 Defaults to "/". This is currently unused. It might make
278 sense to use this for S3::Push and S3::Pull, but allowing
279 resources to have slashes in their names that aren't
280 marking directories would be problematic. Hence, this
281 currently does nothing.
282
283 -default-acl private|public-read|public-read-write|authenti‐
284 cated-read|keep|calc
285 Defaults to an empty string. If no -acl argument is pro‐
286 vided to S3::Put or S3::Push, this string is used (given
287 as the x-amz-acl header if not keep or calc). If this is
288 also empty, no x-amz-acl header is generated. This is
289 not used by S3::REST.
290
291 -default-bucket bucketname
292 If no bucket is given to S3::GetBucket, S3::PutBucket,
293 S3::Get, S3::Put, S3::Head, S3::Acl, S3::Delete,
294 S3::Push, S3::Pull, or S3::Toss, and if this configura‐
295 tion variable is not an empty string (and not simply
296 "/"), then this value will be used for the bucket. This
297 is useful if one program does a large amount of resource
298 manipulation within a single bucket.
299
300
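       For example, a typical application might configure the library once
       at startup; the credentials shown are placeholders for your own AWS
       access identifiers:

           package require S3
           S3::Configure \
               -accesskeyid     "YOUR-ACCESS-KEY-ID" \
               -secretaccesskey "YOUR-SECRET-ACCESS-KEY" \
               -default-bucket  "tcls3-example-bucket" \
               -retries         5
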
301 S3::SuggestBucket ?name?
302 The S3::SuggestBucket command accepts an optional string as a
303 prefix and returns a valid bucket name containing the name argument
304 and the Access Key ID. This makes the name unique to the owner
305 and to the application (assuming the application picks a good
306 name argument). If no name is provided, the name from S3::Con‐
307 figure -bucket-prefix is used. If that too is empty (which is
308 not the default), an error is thrown.
309
310 S3::REST dict
311 The S3::REST command takes as an argument a dictionary and
312 returns a dictionary. The return dictionary has the same keys
313 as the input dictionary, and includes additional keys as the
314 result. The presence or absence of keys in the input dictionary
315 can control the behavior of the routine. It never throws an
316 error directly, but includes keys "error", "errorInfo", and
317 "errorCode" if necessary. Some keys are required, some
318 optional. The routine can run either in blocking or non-blocking
319 mode, based on the presence of resultvar in the input dictio‐
320 nary. This requires the -accesskeyid and -secretaccesskey to be
321 configured via S3::Configure before being called.
322
323 The possible input keys are these:
324
325 verb GET|PUT|DELETE|HEAD
326 This required item indicates the verb to be used.
327
328 resource string
329 This required item indicates the resource to be accessed.
330 A leading / is added if not there already. It will be
331 URL-encoded for you if necessary. Do not supply a
332 resource name that is already URL-encoded.
333
334 ?rtype torrent|acl?
335 This indicates a torrent or acl resource is being manipu‐
336 lated. Do not include this in the resource key, or the
337 "?" separator will get URL-encoded.
338
339 ?parameters dict?
340 This optional dictionary provides parameters added to the
341 URL for the transaction. The keys must be in the correct
342 case (which is confusing in the Amazon documentation) and
343 the values must be valid. This can be an empty dictionary
344 or omitted entirely if no parameters are desired. No
345 other error checking on parameters is performed.
346
347 ?headers dict?
348 This optional dictionary provides headers to be added to
349 the HTTP request. The keys must be in lower case for the
350 authentication to work. The values must not contain
351 embedded newlines or carriage returns. This is primarily
352 useful for adding x-amz-* headers. Since authentication
353 is calculated by S3::REST, do not add that header here.
354 Since content-type gets its own key, also do not add that
355 header here.
356
357 ?inbody contentstring?
358 This optional item, if provided, gives the content that
359 will be sent. It is sent with a transfer encoding of
360 binary, and only the low bytes are used, so use [encoding
361 convertto utf-8] if the string is a utf-8 string. This is
362 written all in one blast, so if you are using non-block‐
363 ing mode and the inbody is especially large, you may wind
364 up blocking on the write socket.
365
366 ?infile filename?
367 This optional item, if provided, and if inbody is not
368 provided, names the file from which the body of the HTTP
369 message will be constructed. The file is opened for read‐
370 ing and sent progressively by [fcopy], so it should not
371 block in non-blocking mode even if the file is very
372 large. The file is transferred in binary mode, so the
373 bytes on your disk will match the bytes in your resource.
374 Due to HTTP restrictions, it must be possible to use
375 [file size] on this file to determine the size at the
376 start of the transaction.
377
378 ?S3chan channel?
379 This optional item, if provided, indicates the already-
380 open socket over which the transaction should be con‐
381 ducted. If not provided, a connection is made to the ser‐
382 vice access point specified via S3::Configure, which is
383 normally s3.amazonaws.com. If this is provided, the chan‐
384 nel is not closed at the end of the transaction.
385
386 ?outchan channel?
387 This optional item, if provided, indicates the already-
388 open channel to which the body returned from S3 should be
389 written. That is, to retrieve a large resource, open a
390 file, set the translation mode, and pass the channel as
391 the value of the key outchan. Output will be written to
392 the channel in pieces so memory does not fill up unneces‐
393 sarily. The channel is not closed at the end of the
394 transaction.
395
396 ?resultvar varname?
397 This optional item, if provided, indicates that S3::REST
398 should run in non-blocking mode. The varname should be
399 fully qualified with respect to namespaces and cannot be
400 local to a proc. If provided, the result of the S3::REST
401 call is assigned to this variable once everything has
402 completed; use trace or vwait to know when this has hap‐
403 pened. If this key is not provided, the result is simply
404 returned from the call to S3::REST and no calls to the
405 eventloop are invoked from within this call.
406
407 ?throwsocket throw|return?
408 This optional item, if provided, indicates that S3::REST
409 should throw an error if throwsocket is throw and a socket
410 error is encountered. It indicates that S3::REST should
411 return the error code in the returned dictionary if a
412 socket error is encountered and this is set to return. If
413 throwsocket is set to return or if the call is not block‐
414 ing, then a socket error (i.e., an error whose error code
415 starts with "S3 socket") will be returned in the dictio‐
416 nary as error, errorInfo, and errorCode. If a foreground
417 call is made (i.e., resultvar is not provided), and this
418 option is not provided or is set to throw, then error
419 will be invoked instead.
420
421 Once the call to S3::REST completes, a new dict is returned, either in
422 the resultvar or as the result of execution. This dict is a copy of the
423 original dict with the results added as new keys. The possible new keys
424 are these:
425
426 error errorstring
427
428 errorInfo errorstring
429
430 errorCode errorstring
431 If an error is caught, these three keys will be set in
432 the result. Note that S3::REST does not consider a
433 non-2XX HTTP return code as an error. The errorCode value
434 will be formatted according to the ERROR REPORTING
435 description. If these are present, other keys described
436 here might not be.
437
438 httpstatus threedigits
439 The three-digit code from the HTTP transaction. 2XX for
440 good, 5XX for server error, etc.
441
442 httpmessage text
443 The textual result after the status code. "OK" or "For‐
444 bidden" or etc.
445
446 outbody contentstring
447 If outchan was not specified, this key will hold a refer‐
448 ence to the (unencoded) contents of the body returned.
449 If Amazon returned an error (i.e., the httpstatus is not a
450 2XX value), the error message will be in outbody or writ‐
451 ten to outchan as appropriate.
452
453 outheaders dict
454 This contains a dictionary of headers returned by Amazon.
455 The keys are always lower case. It's mainly useful for
456 finding the x-amz-meta-* headers, if any, although things
457 like last-modified and content-type are also useful. Both
458 keys and values of this dictionary are trimmed of extraneous
459 whitespace.
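
       To illustrate, a minimal blocking GET through S3::REST might look
       like the following; the bucket and resource names are hypothetical:

           set request [dict create verb GET resource /my-bucket/greeting.txt]
           set result  [S3::REST $request]
           if {[dict exists $result error]} {
               # Re-throw the error the same way a blocking caller would see it.
               error [dict get $result error] \
                   [dict get $result errorInfo] [dict get $result errorCode]
           }
           puts "HTTP [dict get $result httpstatus] [dict get $result httpmessage]"
           puts [dict get $result outbody]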
460
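461 HIGH LEVEL COMMANDS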
462 The routines in this section all make use of one or more calls to
463 S3::REST to do their work, then parse and manage the data in a conve‐
464 nient way. All these commands throw errors as described in ERROR
465 REPORTING unless otherwise noted.
466
467 In all these commands, all arguments are presented as name/value pairs,
468 in any order. All the argument names start with a hyphen.
469
470 There are a few options that are common to many of the commands, and
471 those common options are documented here.
472
473 -blocking boolean
474 If provided and specified as false, then any calls to S3::REST
475 will be non-blocking, and internally these routines will call
476 [vwait] to get the results. In other words, these routines will
477 return the same value, but they'll have event loops running
478 while waiting for Amazon.
479
480 -parse-xml xmlstring
481 If provided, the routine skips actually communicating with Ama‐
482 zon, and instead behaves as if the XML string provided was
483 returned as the body of the call. Since several of these rou‐
484 tines allow the return of data in various formats, this argument
485 can be used to parse existing XML to extract the bits of infor‐
486 mation that are needed. It's also helpful for testing.
487
488 -bucket bucketname
489 Almost every high-level command needs to know what bucket the
490 resources are in. This option specifies that. (Only the command
491 to list available buckets does not require this parameter.)
492 This does not need to be URL-encoded, even if it contains spe‐
493 cial or non-ASCII characters. May or may not contain leading or
494 trailing spaces - commands normalize the bucket. If this is not
495 supplied, the value is taken from S3::Configure -default-bucket
496 if that string isn't empty. Note that spaces and slashes are
497 always trimmed from both ends and the rest must leave a valid
498 bucket.
499
500 -resource resourcename
501 This specifies the resource of interest within the bucket. It
502 may or may not start with a slash - both cases are handled.
503 This does not need to be URL-encoded, even if it contains spe‐
504 cial or non-ASCII characters.
505
506 -compare always|never|exists|missing|newer|date|checksum|different
507 When commands copy resources to files or files to resources, the
508 caller may specify that the copy should be skipped if the con‐
509 tents are the same. This argument specifies the conditions under
510 which the files should be copied. If it is not passed, the
511 result of S3::Configure -default-compare is used, which in turn
512 defaults to "always." The meanings of the various values are
513 these:
514
515 always Always copy the data. This is the default.
516
517 never Never copy the data. This is essentially a no-op, except
518 in S3::Push and S3::Pull where the -delete flag might
519 make a difference.
520
521 exists Copy the data only if the destination already exists.
522
523 missing
524 Copy the data only if the destination does not already
525 exist.
526
527 newer Copy the data if the destination is missing, or if the
528 date on the source is newer than the date on the destina‐
529 tion by at least S3::Configure -slop-seconds seconds. If
530 the source is Amazon, the date is taken from the Last-
531 Modified header. If the source is local, it is taken as
532 the mtime of the file. If the source data is specified in
533 a string rather than a file, it is taken as right now,
534 via [clock seconds].
535
536 date Like newer, except copy if the date is newer or older.
537
538 checksum
539 Calculate the MD5 checksum on the local file or string,
540 ask Amazon for the eTag of the resource, and copy the
541 data if they're different. Copy the data also if the des‐
542 tination is missing. Note that this can be slow with
543 large local files unless the C version of the MD5 support
544 is available.
545
546 different
547 Copy the data if the destination does not exist. If the
548 destination exists and an actual file name was specified
549 (rather than a content string), and the date on the file
550 differs from the date on the resource, copy the data. If
551 the data is provided as a content string, the "date" is
552 treated as "right now", so it will likely always differ
553 unless slop-seconds is large. If the dates are the same,
554 the MD5 checksums are compared, and the data is copied if
555 the checksums differ.
556
557 Note that "newer" and "date" don't care about the contents, and "check‐
558 sum" doesn't care about the dates, but "different" checks both.
559
560 S3::ListAllMyBuckets ?-blocking boolean? ?-parse-xml xmlstring?
561 ?-result-type REST|xml|pxml|dict|names|owner?
562 This routine performs a GET on the Amazon S3 service, which is
563 defined to return a list of buckets owned by the account identi‐
564 fied by the authorization header. (Blame Amazon for the dumb
565 names.)
566
567 -blocking boolean
568 See above for standard definition.
569
570 -parse-xml xmlstring
571 See above for standard definition.
572
573 -result-type REST
574 The dictionary returned by S3::REST is the return value
575 of S3::ListAllMyBuckets. In this case, a non-2XX httpsta‐
576 tus will not throw an error. You may not combine this
577 with -parse-xml.
578
579 -result-type xml
580 The raw XML of the body is returned as the result (with
581 no encoding applied).
582
583 -result-type pxml
584 The XML of the body as parsed by xsxp::parse is returned.
585
586 -result-type dict
587 A dictionary of interesting portions of the XML is
588 returned. The dictionary contains the following keys:
589
590 Owner/ID
591 The Amazon AWS ID (in hex) of the owner of the
592 bucket.
593
594 Owner/DisplayName
595 The Amazon AWS ID's Display Name.
596
597 Bucket/Name
598 A list of names, one for each bucket.
599
600 Bucket/CreationDate
601 A list of dates, one for each bucket, in the same
602 order as Bucket/Name, in ISO format (as returned
603 by Amazon).
604
605
606 -result-type names
607 A list of bucket names is returned with all other infor‐
608 mation stripped out. This is the default result type for
609 this command.
610
611 -result-type owner
612 A list containing two elements is returned. The first
613 element is the owner's ID, and the second is the owner's
614 display name.
615
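       For example, the default result type yields a plain list of bucket
       names, while -result-type owner returns the owner's ID and display
       name:

           foreach bucket [S3::ListAllMyBuckets] {
               puts $bucket
           }
           lassign [S3::ListAllMyBuckets -result-type owner] ownerID ownerName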
616
617 S3::PutBucket ?-bucket bucketname? ?-blocking boolean? ?-acl {}|pri‐
618 vate|public-read|public-read-write|authenticated-read?
619 This command creates a bucket if it does not already exist.
620 Bucket names are globally unique, so you may get a "Forbidden"
621 error from Amazon even if you cannot see the bucket in
622 S3::ListAllMyBuckets. See S3::SuggestBucket for ways to minimize
623 this risk. The x-amz-acl header comes from the -acl option, or
624 from S3::Configure -default-acl if not specified.
625
626 S3::DeleteBucket ?-bucket bucketname? ?-blocking boolean?
627 This command deletes a bucket if it is empty and you have such
628 permission. Note that Amazon's list of buckets is a global
629 resource, requiring far-flung synchronization. If you delete a
630 bucket, it may be quite a few minutes (or hours) before you can
631 recreate it, yielding "Conflict" errors until then.
632
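       As a sketch of a bucket's life cycle, combining S3::SuggestBucket
       with the two commands above (the ACL choice is arbitrary):

           set bucket [S3::SuggestBucket]
           S3::PutBucket    -bucket $bucket -acl private
           # ... put, get, and delete resources here ...
           S3::DeleteBucket -bucket $bucket
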
633 S3::GetBucket ?-bucket bucketname? ?-blocking boolean? ?-parse-xml xml‐
634 string? ?-max-count integer? ?-prefix prefixstring? ?-delimiter delim‐
635 iterstring? ?-result-type REST|xml|pxml|names|dict?
636 This lists the contents of a bucket. That is, it returns a
637 directory listing of resources within a bucket, rather than
638 transfering any user data.
639
640 -bucket bucketname
641 The standard bucket argument.
642
643 -blocking boolean
644 The standard blocking argument.
645
646 -parse-xml xmlstring
647 The standard parse-xml argument.
648
649 -max-count integer
650 If supplied, this is the maximum number of records to be
651 returned. If not supplied, the code will iterate until
652 all records have been found. Not compatible with -parse-
653 xml. Note that if this is supplied, only one call to
654 S3::REST will be made. Otherwise, enough calls will be
655 made to exhaust the listing, buffering results in memory,
656 so take care if you may have huge buckets.
657
658 -prefix prefixstring
659 If present, restricts listing to resources with a partic‐
660 ular prefix. One leading / is stripped if present.
661
662 -delimiter delimiterstring
663 If present, specifies a delimiter for the listing. The
664 presence of this will summarize multiple resources into
665 one entry, as if S3 supported directories. See the Amazon
666 documentation for details.
667
668 -result-type REST|xml|pxml|names|dict
669 This indicates the format of the return result of the
670 command.
671
672 REST If -max-count is specified, the dictionary
673 returned from S3::REST is returned. If -max-count
674 is not specified, a list of all the dictionaries
675 returned from the one or more calls to S3::REST is
676 returned.
677
678 xml If -max-count is specified, the body returned from
679 S3::REST is returned. If -max-count is not speci‐
680 fied, a list of all the bodies returned from the
681 one or more calls to S3::REST is returned.
682
683 pxml If -max-count is specified, the body returned from
684 S3::REST is passed through xsxp::parse and then
685 returned. If -max-count is not specified, a list
686 of all the bodies returned from the one or more
687 calls to S3::REST are each passed through
688 xsxp::parse and then returned.
689
690 names Returns a list of all names found in either the
691 Contents/Key fields or the CommonPrefixes/Prefix
692 fields. If no -delimiter is specified and no -max-
693 count is specified, this returns a list of all
694 resources with the specified -prefix.
695
696 dict Returns a dictionary. (Returns only one dictionary
697 even if -max-count wasn't specified.) The keys of
698 the dictionary are as follows:
699
700 Name The name of the bucket (from the final call
701 to S3::REST).
702
703 Prefix From the final call to S3::REST.
704
705 Marker From the final call to S3::REST.
706
707 MaxKeys
708 From the final call to S3::REST.
709
710 IsTruncated
711 From the final call to S3::REST, so always
712 false if -max-count is not specified.
713
714 NextMarker
715 Always provided if IsTruncated is true, and
716 calculated if Amazon does not provide it.
717 May be empty if IsTruncated is false.
718
719 Key A list of names of resources in the bucket
720 matching the -prefix and -delimiter
721 restrictions.
722
723 LastModified
724 A list of times of resources in the bucket,
725 in the same order as Key, in the format
726 returned by Amazon. (I.e., it is not parsed
727 into a seconds-from-epoch.)
728
729 ETag A list of entity tags (a.k.a. MD5 check‐
730 sums) in the same order as Key.
731
732 Size A list of sizes in bytes of the resources,
733 in the same order as Key.
734
735 Owner/ID
736 A list of owners of the resources in the
737 bucket, in the same order as Key.
738
739 Owner/DisplayName
740 A list of owners of the resources in the
741 bucket, in the same order as Key. These are
742 the display names.
743
744 CommonPrefixes/Prefix
745 A list of prefixes common to multiple enti‐
746 ties. This is present only if -delimiter
747 was supplied.
748
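       For example, assuming a hypothetical bucket laid out with
       slash-separated names, the listing can be retrieved as plain names
       or as a summary dictionary:

           set keys [S3::GetBucket -bucket my-bucket -prefix photos/ \
                         -result-type names]
           set info [S3::GetBucket -bucket my-bucket -delimiter / \
                         -result-type dict]
           puts "pseudo-directories: [dict get $info CommonPrefixes/Prefix]"
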
749 S3::Put ?-bucket bucketname? -resource resourcename ?-blocking boolean?
750 ?-file filename? ?-content contentstring? ?-acl private|public-
751 read|public-read-write|authenticated-read|calc|keep? ?-content-type
752 contenttypestring? ?-x-amz-meta-* metadatatext? ?-compare comparemode?
753 This command sends data to a resource on Amazon's servers for
754 storage, using the HTTP PUT command. It returns 0 if the -com‐
755 pare mode prevented the transfer, 1 if the transfer worked, or
756 throws an error if the transfer was attempted but failed.
757 Server 5XX errors and S3 socket errors are retried according to
758 S3::Configure -retries settings before throwing an error; other
759 errors throw immediately.
760
761 -bucket
762 This specifies the bucket into which the resource will be
763 written. Leading and/or trailing slashes are removed for
764 you, as are spaces.
765
766 -resource
767 This is the full name of the resource within the bucket.
768 A single leading slash is removed, but not a trailing
769 slash. Spaces are not trimmed.
770
771 -blocking
772 The standard blocking flag.
773
774 -file If this is specified, the filename must exist, must be
775 readable, and must not be a special or directory file.
776 [file size] must apply to it and must not change for the
777 lifetime of the call. The default content-type is calcu‐
778 lated based on the name and/or contents of the file.
779 Specifying this is an error if -content is also speci‐
780 fied, but at least one of -file or -content must be spec‐
781 ified. (The file is allowed to not exist or not be read‐
782 able if -compare never is specified.)
783
784 -content
785 If this is specified, the contentstring is sent as the
786 body of the resource. The content-type defaults to
787 "application/octet-stream". Only the low bytes are sent,
788 so non-ASCII should use the appropriate encoding (such as
789 [encoding convertto utf-8]) before passing it to this
790 routine, if necessary. Specifying this is an error if
791 -file is also specified, but at least one of -file or
792 -content must be specified.
793
794 -acl This defaults to S3::Configure -default-acl if not speci‐
795 fied. It sets the x-amz-acl header on the PUT operation.
796 If the value provided is calc, the x-amz-acl header is
797 calculated based on the I/O permissions of the file to be
798 uploaded; it is an error to specify calc and -content.
799 If the value provided is keep, the acl of the resource is
800 read before the PUT (or the default is used if the
801 resource does not exist), then set back to what it was
802 after the PUT (if it existed). An error will occur if the
803 resource is successfully written but the kept ACL cannot
804 be then applied. This should never happen. Note: calc
805 is not currently fully implemented.
806
807 -x-amz-meta-*
808 If any header starts with "-x-amz-meta-", its contents
809 are added to the PUT command to be stored as metadata
810 with the resource. Again, no encoding is performed, and
811 the metadata should not contain characters like newlines,
812 carriage returns, and so on. It is best to stick with
813 simple ASCII strings, or to fix the library in several
814 places.
815
816 -content-type
817 This overrides the content-type calculated by -file or
818 sets the content-type for -content.
819
820 -compare
821 This is the standard compare mode argument. S3::Put
822 returns 1 if the data was copied or 0 if the copy was
823 skipped because the comparison mode indicated it should
824 be skipped.
825
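       For example, the following uploads a file and then a literal string;
       the bucket, resource names, and metadata are placeholders:

           S3::Put -bucket my-bucket -resource docs/readme.txt \
               -file /tmp/readme.txt -content-type text/plain \
               -x-amz-meta-origin laptop -compare newer

           S3::Put -bucket my-bucket -resource docs/hello.txt \
               -content [encoding convertto utf-8 "Hello, world\n"] \
               -acl public-read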
826
827 S3::Get ?-bucket bucketname? -resource resourcename ?-blocking boolean?
828 ?-compare comparemode? ?-file filename? ?-content contentvarname?
829 ?-timestamp aws|now? ?-headers headervarname?
830 This command retrieves data from a resource on Amazon's S3
831 servers, using the HTTP GET command. It returns 0 if the -com‐
832 pare mode prevented the transfer, 1 if the transfer worked, or
833 throws an error if the transfer was attempted but failed. Server
834 5XX errors and S3 socket errors are retried according to
835 S3::Configure -retries settings before throwing an error; other errors
836 throw immediately. Note that this is always authenticated as the
837 user configured via S3::Configure -accesskeyid. Use the
838 Tcllib http package for unauthenticated GETs.
839
840 -bucket
841 This specifies the bucket from which the resource will be
842 read. Leading and/or trailing slashes are removed for
843 you, as are spaces.
844
845 -resource
846 This is the full name of the resource within the bucket.
847 A single leading slash is removed, but not a trailing
848 slash. Spaces are not trimmed.
849
850 -blocking
851 The standard blocking flag.
852
853 -file If this is specified, the body of the resource will be
854 read into this file, incrementally without pulling it
855 entirely into memory first. The parent directory must
856 already exist. If the file already exists, it must be
857 writable. If an error is thrown part-way through the
858 process and the file already existed, it may be clob‐
859 bered. If an error is thrown part-way through the process
860 and the file did not already exist, any partial bits will
861 be deleted. Specifying this is an error if -content is
862 also specified, but at least one of -file or -content
863 must be specified.
864
865 -timestamp
866 This is only valid in conjunction with -file. It may be
867 specified as now or aws. The default is now. If now, the
868 file's modification date is left up to the system. If
869 aws, the file's mtime is set to match the Last-Modified
870 header on the resource, synchronizing the two appropri‐
871 ately for -compare date or -compare newer.
872
873 -content
874 If this is specified, the contentvarname is a variable in
875 the caller's scope (not necessarily global) that receives
876 the value of the body of the resource. No encoding is
877 done, so if the resource (for example) represents a UTF-8
878 byte sequence, use [encoding convertfrom utf-8] to get a
879 valid UTF-8 string. If this is specified, the -compare is
880 ignored unless it is never, in which case no assignment
881 to contentvarname is performed. Specifying this is an
882 error if -file is also specified, but at least one of
883 -file or -content must be specified.
884
885 -compare
886 This is the standard compare mode argument. S3::Get
887 returns 1 if the data was copied or 0 if the copy was
888 skipped because the comparison mode indicated it should
889 be skipped.
890
891 -headers
892 If this is specified, the headers resulting from the
893 fetch are stored in the provided variable, as a dictio‐
894 nary. This will include content-type and x-amz-meta-*
895 headers, as well as the usual HTTP headers, the x-amz-id
896 debugging headers, and so on. If no file is fetched (due
897 to -compare or other errors), no assignment to this vari‐
898 able is performed.
899
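       For example, the first call below mirrors a resource into a local
       file, keeping Amazon's timestamp; the second reads a small resource
       into a variable (names are placeholders):

           S3::Get -bucket my-bucket -resource docs/readme.txt \
               -file /tmp/readme.txt -timestamp aws -compare newer

           S3::Get -bucket my-bucket -resource docs/hello.txt -content raw
           set text [encoding convertfrom utf-8 $raw]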
900
901 S3::Head ?-bucket bucketname? -resource resourcename ?-blocking bool‐
902 ean? ?-dict dictvarname? ?-headers headersvarname? ?-status statusvar‐
903 name?
904 This command requests HEAD from the resource. It returns
905 whether a 2XX code was returned as a result of the request,
906 never throwing an S3 remote error. That is, if this returns 1,
907 the resource exists and is accessible. If this returns 0, some‐
908 thing went wrong, and the -status result can be consulted for
909 details.
910
911 -bucket
912 This specifies the bucket from which the resource will be
913 read. Leading and/or trailing slashes are removed for
914 you, as are spaces.
915
916 -resource
917 This is the full name of the resource within the bucket.
918 A single leading slash is removed, but not a trailing
919 slash. Spaces are not trimmed.
920
921 -blocking
922 The standard blocking flag.
923
924 -dict If specified, the resulting dictionary from the S3::REST
925 call is assigned to the indicated (not necessarily
926 global) variable in the caller's scope.
927
928 -headers
929 If specified, the dictionary of headers from the result
930 are assigned to the indicated (not necessarily global)
931 variable in the caller's scope.
932
933 -status
934 If specified, the indicated (not necessarily global)
935 variable in the caller's scope is assigned a 2-element
936 list. The first element is the 3-digit HTTP status code,
937 while the second element is the HTTP message (such as
938 "OK" or "Forbidden").
939
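       For example, a hypothetical existence check that also reports why a
       resource is unavailable (the content-length header is assumed to be
       present in the returned headers):

           if {[S3::Head -bucket my-bucket -resource docs/readme.txt \
                   -headers meta -status status]} {
               puts "size: [dict get $meta content-length] bytes"
           } else {
               lassign $status code message
               puts "not available: $code $message"
           }
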
940 S3::GetAcl ?-blocking boolean? ?-bucket bucketname? -resource resource‐
941 name ?-result-type REST|xml|pxml?
942 This command gets the ACL of the indicated resource or throws an
943 error if it is unavailable.
944
945 -blocking boolean
946 See above for standard definition.
947
948 -bucket
949 This specifies the bucket from which the resource will be
950 read. Leading and/or trailing slashes are removed for
951 you, as are spaces.
952
953 -resource
954 This is the full name of the resource within the bucket.
955 A single leading slash is removed, but not a trailing
956 slash. Spaces are not trimmed.
957
958 -parse-xml xml
959 The XML from a previous GetACL can be passed in to be
960 parsed into dictionary form. In this case, -result-type
961 must be pxml or dict.
962
963 -result-type REST
964 The dictionary returned by S3::REST is the return value
965 of S3::GetAcl. In this case, a non-2XX httpstatus will
966 not throw an error.
967
968 -result-type xml
969 The raw XML of the body is returned as the result (with
970 no encoding applied).
971
972 -result-type pxml
973 The XML of the body as parsed by xsxp::parse is returned.
974
975 -result-type dict
976 This fetches the ACL, parses it, and returns a dictionary
977 of two elements.
978
979 The first element has the key "owner" whose value is the
980 canonical ID of the owner of the resource.
981
982 The second element has the key "acl" whose value is a
983 dictionary. Each key in the dictionary is one of Ama‐
984 zon's permissions, namely "READ", "WRITE", "READ_ACP",
985 "WRITE_ACP", or "FULL_CONTROL". Each value of each key
986 is a list of canonical IDs or group URLs that have that
987 permission. Elements are not in the list in any particu‐
988 lar order, and not all keys are necessarily present.
989 Display names are not returned, as they are not espe‐
990 cially useful; use pxml to obtain them if necessary.
991
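       For example, to print the grants on a resource (names are
       placeholders):

           set acl [S3::GetAcl -bucket my-bucket -resource docs/readme.txt \
                        -result-type dict]
           puts "owner: [dict get $acl owner]"
           dict for {permission grantees} [dict get $acl acl] {
               puts "$permission: $grantees"
           }
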
992 S3::PutAcl ?-blocking boolean? ?-bucket bucketname? -resource resource‐
993 name ?-acl new-acl?
994 This sets the ACL on the indicated resource. It returns the XML
995 written to the ACL, or throws an error if anything went wrong.
996
997 -blocking boolean
998 See above for standard definition.
999
1000 -bucket
1001 This specifies the bucket from which the resource will be
1002 read. Leading and/or trailing slashes are removed for
1003 you, as are spaces.
1004
1005 -resource
1006 This is the full name of the resource within the bucket.
1007 A single leading slash is removed, but not a trailing
1008 slash. Spaces are not trimmed.
1009
1010 -owner If this is provided, it is assumed to match the owner of
1011 the resource. Otherwise, a GET may need to be issued
1012 against the resource to find the owner. If you already
1013 have the owner (such as from a call to S3::GetAcl), you
1014 can pass the value of the "owner" key as the value of
1015 this option, and it will be used in the construction of
1016 the XML.
1017
1018 -acl If this option is specified, it provides the ACL the
1019 caller wishes to write to the resource. If this is not
1020 supplied or is empty, the value is taken from S3::Config‐
1021 ure -default-acl. The ACL is written with a PUT to the
1022 ?acl resource.
1023
1024 If the value passed to this option starts with "<", it is
1025 taken to be a body to be PUT to the ACL resource.
1026
1027 If the value matches one of the standard Amazon x-amz-acl
1028 headers (i.e., a canned access policy), that header is
1029 translated to XML and then applied. The canned access
1030 policies are private, public-read, public-read-write, and
1031 authenticated-read (in lower case).
1032
1033 Otherwise, the value is assumed to be a dictionary for‐
1034 matted as the "acl" sub-entry within the dict returned by
1035 S3::GetAcl -result-type dict. The proper XML is gener‐
1036 ated and applied to the resource. Note that a value con‐
1037 taining "//" is assumed to be a group, a value containing
1038 "@" is assumed to be an AmazonCustomerByEmail, and other‐
1039 wise the value is assumed to be a canonical Amazon ID.
1040
1041 Note that you cannot change the owner, so calling GetAcl
1042 on a resource owned by one user and applying it via
1043 PutAcl on a resource owned by another user may not do
1044 exactly what you expect.
1045
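       For example, to apply a canned policy while reusing the owner ID
       obtained from S3::GetAcl, which avoids an extra request (names are
       placeholders):

           set current [S3::GetAcl -bucket my-bucket -resource docs/readme.txt \
                            -result-type dict]
           S3::PutAcl -bucket my-bucket -resource docs/readme.txt \
               -owner [dict get $current owner] -acl public-read
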
1046 S3::Delete ?-bucket bucketname? -resource resourcename ?-blocking bool‐
1047 ean? ?-status statusvar?
1048 This command deletes the specified resource from the specified
1049 bucket. It returns 1 if the resource was deleted successfully,
1050 0 otherwise. It returns 0 rather than throwing an S3 remote
1051 error.
1052
1053 -bucket
1054 This specifies the bucket from which the resource will be
1055 deleted. Leading and/or trailing slashes are removed for
1056 you, as are spaces.
1057
1058 -resource
1059 This is the full name of the resource within the bucket.
1060 A single leading slash is removed, but not a trailing
1061 slash. Spaces are not trimmed.
1062
1063 -blocking
1064 The standard blocking flag.
1065
1066 -status
1067 If specified, the indicated (not necessarily global)
1068 variable in the caller's scope is set to a two-element
1069 list. The first element is the 3-digit HTTP status code.
1070 The second element is the HTTP message (such as "OK" or
1071 "Forbidden"). Note that Amazon's DELETE result is 204 on
1072 success, that being the code indicating no content in the
1073 returned body.
1074
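       For example, a deletion that reports the HTTP status when it fails
       (names are placeholders):

           if {![S3::Delete -bucket my-bucket -resource docs/old.txt \
                   -status status]} {
               lassign $status code message
               puts "delete failed: $code $message"
           }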
1075
1076 S3::Push ?-bucket bucketname? -directory directoryname ?-prefix pre‐
1077 fixstring? ?-compare comparemode? ?-x-amz-meta-* metastring? ?-acl
1078 aclcode? ?-delete boolean? ?-error throw|break|continue? ?-progress
1079 scriptprefix?
1080 This synchronises a local directory with a remote bucket by
1081 pushing the differences using S3::Put. Note that if something
1082 has changed in the bucket but not locally, those changes could
1083 be lost. Thus, this is not a general two-way synchronization
1084 primitive. (See S3::Sync for that.) Note too that resource names
1085 are case sensitive, so changing the case of a file on a Windows
1086 machine may lead to otherwise-unnecessary transfers. Note that
1087 only regular files are considered, so devices, pipes, symlinks,
1088 and directories are not copied.
1089
1090 -bucket
1091 This names the bucket into which data will be pushed.
1092
1093 -directory
1094 This names the local directory from which files will be
1095 taken. It must exist, be readable via [glob] and so on.
1096 If only some of the files therein are readable, S3::Push
1097 will PUT those files that are readable and return in its
1098 results the list of files that could not be opened.
1099
1100 -prefix
1101 This names the prefix that will be added to all
1102 resources. That is, it is the remote equivalent of
1103 -directory. If it is not specified, the root of the
1104 bucket will be treated as the remote directory. An exam‐
1105 ple may clarify.
1106
1107
1108 S3::Push -bucket test -directory /tmp/xyz -prefix hello/world
1109
1110
1111 In this example, /tmp/xyz/pdq.html will be stored as
1112 http://s3.amazonaws.com/test/hello/world/pdq.html in Ama‐
1113 zon's servers. Also, /tmp/xyz/abc/def/Hello will be
1114 stored as http://s3.amazon‐
1115 aws.com/test/hello/world/abc/def/Hello in Amazon's
1116 servers. Without the -prefix option, /tmp/xyz/pdq.html
1117 would be stored as http://s3.amazonaws.com/test/pdq.html.
1118
1119 -blocking
1120 This is the standard blocking option.
1121
1122 -compare
1123 If present, this is passed to each invocation of S3::Put.
1124 Naturally, S3::Configure -default-compare is used if this
1125 is not specified.
1126
1127 -x-amz-meta-*
1128 If present, this is passed to each invocation of S3::Put.
1129 All copied files will have the same metadata.
1130
1131 -acl If present, this is passed to each invocation of S3::Put.
1132
1133 -delete
1134 This defaults to false. If true, resources in the desti‐
1135 nation that are not in the source directory are deleted
1136 with S3::Delete. Since only regular files are consid‐
1137 ered, the existence of a symlink, pipe, device, or direc‐
1138 tory in the local source will not prevent the deletion of
1139 a remote resource with a corresponding name.
1140
1141 -error This controls the behavior of S3::Push in the event that
1142 S3::Put throws an error. Note that errors encountered on
1143 the local file system or in reading the list of resources
1144 in the remote bucket always throw errors. This option
1145 allows control over "partial" errors, when some files
1146 were copied and some were not. S3::Delete is always fin‐
1147 ished up, with errors simply recorded in the return
1148 result.
1149
1150 throw The error is rethrown with the same errorCode.
1151
1152 break Processing stops without throwing an error, the
1153 error is recorded in the return value, and the
1154 command returns with a normal return. The calls
1155 to S3::Delete are not started.
1156
1157 continue
1158 This is the default. Processing continues without
1159 throwing, recording the error in the return
1160 result, and resuming with the next file in the
1161 local directory to be copied.
1162
1163 -progress
1164 If this is specified and the indicated script prefix is
1165 not empty, the indicated script prefix will be invoked
1166 several times in the caller's context with additional
1167 arguments at various points in the processing. This
1168 allows progress reporting without backgrounding. The
1169 provided prefix will be invoked with additional argu‐
1170 ments, with the first additional argument indicating what
1171 part of the process is being reported on. The prefix is
1172 initially invoked with args as the first additional argu‐
1173 ment and a dictionary representing the normalized argu‐
1174 ments to the S3::Push call as the second additional argu‐
1175 ment. Then the prefix is invoked with local as the first
1176 additional argument and a list of suffixes of the files
1177 to be considered as the second argument. Then the prefix
1178 is invoked with remote as the first additional argument
1179 and a list of suffixes existing in the remote bucket as
1180 the second additional argument. Then, for each file in
1181 the local list, the prefix will be invoked with start as
1182 the first additional argument and the common suffix as
1183 the second additional argument. When S3::Put returns for
1184 that file, the prefix will be invoked with copy as the
1185 first additional argument, the common suffix as the sec‐
1186 ond additional argument, and a third argument that will
1187 be "copied" (if S3::Put sent the resource), "skipped" (if
1188 S3::Put decided not to based on -compare), or the error‐
1189 Code that S3::Put threw due to unexpected errors (in
1190 which case the third argument is a list that starts with
1191 "S3"). When all files have been transferred, the prefix
1192 may be invoked zero or more times with delete as the
1193 first additional argument and the suffix of the resource
1194 being deleted as the second additional argument, with a
1195 third argument being either an empty string (if the
1196 delete worked) or the errorCode from S3::Delete if it
1197 failed. Finally, the prefix will be invoked with finished
1198 as the first additional argument and the return value as
1199 the second additional argument.
1200
1201 The return result from this command is a dictionary. The keys
1202 are the suffixes (i.e., the common portion of the path after the
1203 -directory and -prefix), while the values are either "copied",
1204 "skipped" (if -compare indicated not to copy the file), or the
1205 errorCode thrown by S3::Put, as appropriate. If -delete was
1206 true, there may also be entries for suffixes with the value
1207 "deleted" or "notdeleted", indicating whether the attempted
1208 S3::Delete worked or not, respectively. There is one additional
1209 pair in the return result, whose key is the empty string and
1210 whose value is a nested dictionary. The keys of this nested
1211 dictionary include "filescopied" (the number of files success‐
1212 fully copied), "bytescopied" (the number of data bytes in the
1213 files copied, excluding headers, metadata, etc), "com‐
1214 pareskipped" (the number of files not copied due to -compare
1215 mode), "errorskipped" (the number of files not copied due to
1216 thrown errors), "filesdeleted" (the number of resources deleted
1217 due to not having corresponding files locally, or 0 if -delete
1218 is false), and "filesnotdeleted" (the number of resources whose
1219 deletion was attempted but failed).
1220
1221 Note that this is currently implemented somewhat inefficiently.
1222 It fetches the bucket listing (including timestamps and eTags),
1223 then calls S3::Put, which uses HEAD to find the timestamps and
1224 eTags again. Correcting this with no API change is planned for a
1225 future upgrade.
1226
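       For example, the following sketch pushes a local tree with a simple
       progress handler; the handler name, bucket, and paths are
       placeholders:

           proc report {phase args} {
               switch -- $phase {
                   copy {
                       # args are the suffix and "copied", "skipped", or an errorCode.
                       lassign $args suffix disposition
                       puts "$suffix: $disposition"
                   }
                   finished { puts "push finished" }
               }
           }
           set result [S3::Push -bucket my-bucket -directory /var/www/site \
                           -prefix site -compare newer -delete true \
                           -progress report]
           set totals [dict get $result ""]
           puts "[dict get $totals filescopied] files,\
                 [dict get $totals bytescopied] bytes copied"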
1227
1228 S3::Pull ?-bucket bucketname? -directory directoryname ?-prefix pre‐
1229 fixstring? ?-blocking boolean? ?-compare comparemode? ?-delete boolean?
1230 ?-timestamp aws|now? ?-error throw|break|continue? ?-progress script‐
1231 prefix?
1232 This synchronises a remote bucket with a local directory by
1233 pulling the differences using S3::Get. If something has been
1234 changed locally but not in the bucket, those differences may be
1235 lost. This is not a general two-way synchronization mechanism.
1236 (See S3::Sync for that.) This creates directories if needed;
1237 new directories are created with default permissions. Note that
1238 resource names are case sensitive, so changing the case of a
1239 file on a Windows machine may lead to otherwise-unnecessary
1240 transfers. Also, try not to store data in resources that end
1241 with a slash, or which are prefixes of resources that otherwise
1242 would start with a slash; i.e., don't use this if you store data
1243 in resources whose names have to be directories locally.
1244
1245 Note that this is currently implemented somewhat inefficiently.
1246 It fetches the bucket listing (including timestamps and eTags),
1247 then calls S3::Get, which uses HEAD to find the timestamps and
1248 eTags again. Correcting this with no API change is planned for a
1249 future upgrade.
1250
1251 -bucket
1252 This names the bucket from which data will be pulled.
1253
1254 -directory
1255 This names the local directory into which files will be
1256 written. It must exist, be readable via [glob], writable
1257 for file creation, and so on. If only some of the files
1258 therein are writable, S3::Pull will GET those files that
1259 are writable and return in its results the list of files
1260 that could not be opened.
1261
1262 -prefix
1263 The prefix of resources that will be considered for
1264 retrieval. See S3::Push for more details, examples, etc.
1265 (Of course, S3::Pull reads rather than writes, but the
1266 prefix is treated similarly.)
1267
1268 -blocking
1269 This is the standard blocking option.
1270
1271 -compare
1272 This is passed to each invocation of S3::Get if provided.
1273 Naturally, S3::Configure -default-compare is used if this
1274 is not provided.
1275
1276 -timestamp
1277 This is passed to each invocation of S3::Get if provided.
1278
1279 -delete
1280 If this is specified and true, files that exist in the
1281 -directory that are not in the -prefix will be deleted
1282 after all resources have been copied. In addition, empty
1283 directories (other than the top-level -directory) will be
1284 deleted, as Amazon S3 has no concept of an empty direc‐
1285 tory.
1286
1287 -error See S3::Push for a description of this option.
1288
1289 -progress
1290 See S3::Push for a description of this option. It dif‐
1291 fers slightly in that local directories may be included
1292 with a trailing slash to indicate they are directories.
1293
1294 The return value from this command is a dictionary. It is iden‐
1295 tical in form and meaning to the description of the return
1296 result of S3::Push. It differs only in that directories may be
1297 included, with a trailing slash in their name, if they are empty
1298 and get deleted.
1299
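       For example, a sketch that mirrors one prefix of a bucket into a
       scratch directory (names are placeholders):

           S3::Pull -bucket my-bucket -directory /tmp/site-copy \
               -prefix site -compare newer -timestamp aws -delete true
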
1300 S3::Toss ?-bucket bucketname? -prefix prefixstring ?-blocking boolean?
1301 ?-error throw|break|continue? ?-progress scriptprefix?
1302 This deletes some or all resources within a bucket. It would be
1303 considered a "recursive delete" had Amazon implemented actual
1304 directories.
1305
1306 -bucket
1307 The bucket from which resources will be deleted.
1308
1309 -blocking
1310 The standard blocking option.
1311
1312 -prefix
1313 The prefix for resources to be deleted. Any resource that
1314 starts with this string will be deleted. This is
1315 required. To delete everything in the bucket, pass an
1316 empty string for the prefix.
1317
1318 -error If this is "throw", S3::Toss rethrows any errors it
1319 encounters. If this is "break", S3::Toss returns with a
1320 normal return after the first error, recording that error
1321 in the return result. If this is "continue", which is the
1322 default, S3::Toss continues on and lists all errors in
1323 the return result.
1324
1325 -progress
1326 If this is specified and not an empty string, the script
1327 prefix will be invoked several times in the context of
1328 the caller with additional arguments appended. Ini‐
1329 tially, it will be invoked with the first additional
1330 argument being args and the second being the processed
1331 list of arguments to S3::Toss. Then it is invoked with
1332 remote as the first additional argument and the list of
1333 suffixes in the bucket to be deleted as the second addi‐
1334 tional argument. Then it is invoked with the first addi‐
1335 tional argument being delete and the second additional
1336 argument being the suffix deleted and the third addi‐
1337 tional argument being "deleted" or "notdeleted" depending
1338 on whether S3::Delete threw an error. Finally, the
1339 script prefix is invoked with a first additional argument
1340 of "finished" and a second additional argument of the
1341 return value.
1342
1343 The return value is a dictionary. The keys are the suffixes of
1344 files that S3::Toss attempted to delete, and whose values are
1345 either the string "deleted" or "notdeleted". There is also one
1346 additional pair, whose key is the empty string and whose value
1347 is an embedded dictionary. The keys of this embedded dictionary
1348 include "filesdeleted" and "filesnotdeleted", each of which has
1349 integer values.
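
As an illustration, here is a minimal sketch of a -progress callback
together with reading the summary pair of the returned dictionary.
The bucket name, the prefix, and the callback name are placeholders.

    # Placeholder progress callback; the first extra argument names the
    # phase (args, remote, delete, finished) as described above.
    proc toss_progress {phase args} {
        switch -- $phase {
            remote   {puts "[llength [lindex $args 0]] resources to delete"}
            delete   {puts "[lindex $args 0] -> [lindex $args 1]"}
            finished {puts "finished"}
        }
    }
    set result [S3::Toss -bucket mybucket -prefix old/logs/ \
        -error continue -progress toss_progress]
    # The empty-string key holds the summary counters described above.
    set summary [dict get $result ""]
    puts "deleted: [dict get $summary filesdeleted]"
    puts "not deleted: [dict get $summary filesnotdeleted]"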

LIMITATIONS
· The pure-Tcl MD5 checking is slow. If you are processing files in
the megabyte range, consider ensuring that compiled (binary) md5
support is available (see the sketch after this list).
1355
1356 · The commands S3::Pull and S3::Push fetch a directory listing
1357 which includes timestamps and MD5 hashes, then invoke S3::Get
1358 and S3::Put. If a complex -compare mode is specified, S3::Get
1359 and S3::Put will invoke a HEAD operation for each file to fetch
1360 timestamps and MD5 hashes of each resource again. It is expected
1361 that a future release of this package will solve this without
1362 any API changes.
1363
1364 · The commands S3::Pull and S3::Push fetch a directory listing
1365 without using -max-count. The entire directory is pulled into
1366 memory at once. For very large buckets, this could be a perfor‐
1367 mance problem. The author, at this time, does not plan to change
1368 this behavior. Welcome to Open Source.
1369
1370 · S3::Sync is neither designed nor implemented yet. The intention
1371 would be to keep changes synchronised, so changes could be made
1372 to both the bucket and the local directory and be merged by
1373 S3::Sync.
1374
· Nor is -compare calc fully implemented. This is primarily due to
Windows not providing a convenient method for distinguishing
between local files that are "public-read" or "public-read-write".
Assistance figuring out TWAPI for this would be appreciated. The
U**X semantics are difficult to map directly as well. See the
source for details. Note that there are no tests for calc, since it
isn't done yet.
1382
1383 · The HTTP processing is implemented within the library, rather
1384 than using a "real" HTTP package. Hence, multi-line headers are
1385 not (yet) handled correctly. Do not include carriage returns or
1386 linefeeds in x-amz-meta-* headers, content-type values, and so
1387 on. The author does not at this time expect to improve this.
1388
1389 · Internally, S3::Push and S3::Pull and S3::Toss are all very sim‐
1390 ilar and should be refactored.
1391
· The idea of using -compare never -delete true to delete files that
have been deleted from one place but not the other, while not
copying changed files, is untested.
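
Regarding the first note above, here is a rough sketch of one way an
application might check for compiled checksum support before doing
large transfers. Whether Trf or tcllibc actually accelerates the md5
package, and whether load order matters, depends on the local
installation; treat this as an assumption, not documented behaviour.

    # Rough sketch: try to load packages that may provide compiled md5
    # support; otherwise fall back to the (slow) pure-Tcl implementation.
    if {[catch {package require Trf}] && [catch {package require tcllibc}]} {
        puts stderr "no compiled md5 support found; checksums may be slow"
    }
    package require S3   ;# load S3 (and thus md5) after any accelerator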

USAGE SUGGESTIONS
1397 To fetch a "directory" out of a bucket, make changes, and store it
1398 back:
1399
1400
file mkdir ./tempfiles
S3::Pull -bucket sample -prefix of/interest -directory ./tempfiles \
    -timestamp aws
do_my_process ./tempfiles other arguments
S3::Push -bucket sample -prefix of/interest -directory ./tempfiles \
    -compare newer -delete true
1407
1408
To delete files locally that were deleted from S3, without otherwise
updating local files:
1411
1412
S3::Pull -bucket sample -prefix of/interest -directory ./myfiles \
    -compare never -delete true
1415

FUTURE DEVELOPMENTS
1418 The author intends to work on several additional projects related to
1419 this package, in addition to finishing the unfinished features.
1420
1421 First, a command-line program allowing browsing of buckets and transfer
1422 of files from shell scripts and command prompts is useful.
1423
1424 Second, a GUI-based program allowing visual manipulation of bucket and
1425 resource trees not unlike Windows Explorer would be useful.
1426
Third, a command-line (and perhaps a GUI-based) program called
"OddJob" that will use S3 to synchronize computation amongst multiple
servers running OddJob would also be useful. An S3 bucket will be set
up with a number of scripts to run, and the OddJob program can be
invoked on multiple machines to run scripts on all the machines, each
machine moving on to the next unstarted task as it finishes its
current one. This is still being designed, and it is intended
primarily to be run on Amazon's Elastic Compute Cloud.

TLS SECURITY CONSIDERATIONS
1436 This package uses the TLS package to handle the security for https urls
1437 and other socket connections.
1438
Policy decisions like the set of protocols to support and what
ciphers to use are not the responsibility of TLS, nor of this package
itself, however. Such decisions are the responsibility of whichever
application is using the package, and are likely influenced by the
set of servers the application will talk to as well.
1444
For example, in light of the recent POODLE attack
[http://googleonlinesecurity.blogspot.co.uk/2014/10/this-poodle-bites-exploiting-ssl-30.html]
discovered by Google, many servers will disable support for the SSLv3
protocol. To handle this change, the applications using TLS must be
patched, not this package nor TLS itself. Such a patch may be as
simple as generally activating tls1 support, as shown in the example
below.
1452
1453
package require tls
tls::init -tls1 1 ;# forcibly activate support for the TLS1 protocol

... your own application code ...
1458

BUGS, IDEAS, FEEDBACK
1461 This document, and the package it describes, will undoubtedly contain
1462 bugs and other problems. Please report such in the category amazon-s3
1463 of the Tcllib Trackers [http://core.tcl.tk/tcllib/reportlist]. Please
1464 also report any ideas for enhancements you may have for either package
1465 and/or documentation.
1466
When proposing code changes, please provide unified diffs, i.e. the
output of diff -u.
1469
1470 Note further that attachments are strongly preferred over inlined
1471 patches. Attachments can be made by going to the Edit form of the
1472 ticket immediately after its creation, and then using the left-most
1473 button in the secondary navigation bar.

KEYWORDS
1476 amazon, cloud, s3

CATEGORY
1479 Networking

COPYRIGHT
1482 2006,2008 Darren New. All Rights Reserved. See LICENSE.TXT for terms.
1483
1484
1485
1486
tcllib 1.0.3 S3(n)