DBM::Deep(3)          User Contributed Perl Documentation         DBM::Deep(3)
2
3
4

NAME

6       DBM::Deep - A pure perl multi-level hash/array DBM that supports
7       transactions
8

VERSION

10       2.0016
11

SYNOPSIS

13         use DBM::Deep;
14         my $db = DBM::Deep->new( "foo.db" );
15
16         $db->{key} = 'value';
17         print $db->{key};
18
19         $db->put('key' => 'value');
20         print $db->get('key');
21
22         # true multi-level support
23         $db->{my_complex} = [
24             'hello', { perl => 'rules' },
25             42, 99,
26         ];
27
28         $db->begin_work;
29
30         # Do stuff here
31
32         $db->rollback;
33         $db->commit;
34
35         tie my %db, 'DBM::Deep', 'foo.db';
36         $db{key} = 'value';
37         print $db{key};
38
39         tied(%db)->put('key' => 'value');
40         print tied(%db)->get('key');
41

DESCRIPTION

43       A unique flat-file database module, written in pure perl. True multi-
44       level hash/array support (unlike MLDBM, which is faked), hybrid OO /
45       tie() interface, cross-platform FTPable files, ACID transactions, and
46       is quite fast.  Can handle millions of keys and unlimited levels
47       without significant slow-down. Written from the ground-up in pure perl
48       -- this is NOT a wrapper around a C-based DBM. Out-of-the-box
49       compatibility with Unix, Mac OS X and Windows.
50

VERSION DIFFERENCES

52       NOTE: 2.0000 introduces Unicode support in the File back end. This
53       necessitates a change in the file format. The version 1.0003 format is
54       still supported, though, so we have added a db_version() method. If you
55       are using a database in the old format, you will have to upgrade it to
56       get Unicode support.
57
58       NOTE: 1.0020 introduces different engines which are backed by different
59       types of storage. There is the original storage (called 'File') and a
60       database storage (called 'DBI'). q.v. "PLUGINS" for more information.
61
62       NOTE: 1.0000 has significant file format differences from prior
63       versions.  There is a backwards-compatibility layer at
64       "utils/upgrade_db.pl". Files created by 1.0000 or higher are NOT
65       compatible with scripts using prior versions.
66

PLUGINS

68       DBM::Deep is a wrapper around different storage engines. These are:
69
70   File
71       This is the traditional storage engine, storing the data to a custom
72       file format. The parameters accepted are:
73
74       ·   file
75
76           Filename of the DB file to link the handle to. You can pass a full
77           absolute filesystem path, partial path, or a plain filename if the
78           file is in the current working directory. This is a required
79           parameter (though q.v. fh).
80
81       ·   fh
82
83           If you want, you can pass in the fh instead of the file. This is
84           most useful for doing something like:
85
86             my $db = DBM::Deep->new( { fh => \*DATA } );
87
88           You are responsible for making sure that the fh has been opened
89           appropriately for your needs. If you open it read-only and attempt
90           to write, an exception will be thrown. If you open it write-only or
91           append-only, an exception will be thrown immediately as DBM::Deep
92           needs to read from the fh.
93
94       ·   file_offset
95
96           This is the offset within the file at which the DBM::Deep db starts.
97           Most of the time, you will not need to set this. However, it's
98           there if you want it.
99
100           If you pass in fh and do not set this, it will be set
101           appropriately.
102
103       ·   locking
104
105           Specifies whether locking is to be enabled. DBM::Deep uses Perl's
106           flock() function to lock the database in exclusive mode for writes,
107           and shared mode for reads. Pass any true value to enable. This
108           affects the base DB handle and any child hashes or arrays that use
109           the same DB file. This is an optional parameter, and defaults to 1
110           (enabled). See "LOCKING" below for more.
111
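       Putting those parameters together, a minimal sketch of opening a File-
       backed database through an already-open handle (the filename and the
       offset are purely illustrative):

         # Open the handle yourself, then hand it to DBM::Deep along with the
         # offset at which the database begins inside that file.
         open my $fh, '+<', 'bundle.bin' or die "open: $!";

         my $db = DBM::Deep->new(
             fh          => $fh,
             file_offset => 1024,   # hypothetical: DB starts 1 KB into the file
             locking     => 1,      # the default, shown here for clarity
         );
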
112       When you open an existing database file, the version of the database
113       format will stay the same. But if you are creating a new file, it will
114       be in the latest format.
115
116   DBI
117       This is a storage engine that stores the data in a relational database.
118       Funnily enough, this engine doesn't work with transactions (yet) as
119       InnoDB doesn't do what DBM::Deep needs it to do.
120
121       The parameters accepted are:
122
123       ·   dbh
124
125           This is a DBH that's already been opened with "connect" in DBI.
126
127       ·   dbi
128
129           This is a hashref containing:
130
131           ·   dsn
132
133           ·   username
134
135           ·   password
136
137           ·   connect_args
138
139           These correspond to the 4 parameters "connect" in DBI takes.
140
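       For example, a DBI-backed handle might be opened like this sketch (the
       SQLite DSN and the empty credentials are placeholders, not a tested
       configuration):

         # Let DBM::Deep make the connection itself ...
         my $db = DBM::Deep->new(
             dbi => {
                 dsn          => 'dbi:SQLite:dbname=foo.sqlite',
                 username     => '',
                 password     => '',
                 connect_args => { RaiseError => 1 },
             },
         );

         # ... or reuse a handle you have already opened yourself.
         my $dbh = DBI->connect( 'dbi:SQLite:dbname=foo.sqlite', '', '' );
         my $db2 = DBM::Deep->new( dbh => $dbh );
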
141       NOTE: This has only been tested with MySQL and SQLite (with
142       disappointing results). I plan on extending this to work with
143       PostgreSQL in the near future. Oracle, Sybase, and other engines will
144       come later.
145
146   Planned engines
147       There are plans to extend this functionality to (at least) the
148       following:
149
150       ·   BDB (and other hash engines like memcached)
151
152       ·   NoSQL engines (such as Tokyo Cabinet)
153
154       ·   DBIx::Class (and other ORMs)
155

SETUP

157       Construction can be done OO-style (which is the recommended way), or
158       using Perl's tie() function. Both are examined here.
159
160   OO Construction
161       The recommended way to construct a DBM::Deep object is to use the new()
162       method, which gets you a blessed and tied hash (or array) reference.
163
164         my $db = DBM::Deep->new( "foo.db" );
165
166       This opens a new database handle, mapped to the file "foo.db". If this
167       file does not exist, it will automatically be created. DB files are
168       opened in "r+" (read/write) mode, and the type of object returned is a
169       hash, unless otherwise specified (see "Options" below).
170
171       You can pass a number of options to the constructor to specify things
172       like locking, autoflush, etc. This is done by passing an inline hash
173       (or hashref):
174
175         my $db = DBM::Deep->new(
176             file      => "foo.db",
177             locking   => 1,
178             autoflush => 1
179         );
180
181       Notice that the filename is now specified inside the hash with the
182       "file" parameter, as opposed to being the sole argument to the
183       constructor. This is required if any options are specified.  See
184       "Options" below for the complete list.
185
186       You can also start with an array instead of a hash. For this, you must
187       specify the "type" parameter:
188
189         my $db = DBM::Deep->new(
190             file => "foo.db",
191             type => DBM::Deep->TYPE_ARRAY
192         );
193
194       Note: Specifying the "type" parameter only takes effect when beginning
195       a new DB file. If you create a DBM::Deep object with an existing file,
196       the "type" will be loaded from the file header, and an error will be
197       thrown if the wrong type is passed in.
198
199   Tie Construction
200       Alternately, you can create a DBM::Deep handle by using Perl's built-in
201       tie() function. The object returned from tie() can be used to call
202       methods, such as lock() and unlock(). (That object can be retrieved
203       from the tied variable at any time using tied() - please see perltie
204       for more info.)
205
206         my %hash;
207         my $db = tie %hash, "DBM::Deep", "foo.db";
208
209         my @array;
210         my $db = tie @array, "DBM::Deep", "bar.db";
211
212       As with the OO constructor, you can replace the DB filename parameter
213       with a hash containing one or more options (see "Options" just below
214       for the complete list).
215
216         tie %hash, "DBM::Deep", {
217             file => "foo.db",
218             locking => 1,
219             autoflush => 1
220         };
221
222   Options
223       There are a number of options that can be passed in when constructing
224       your DBM::Deep objects. These apply to both the OO- and tie- based
225       approaches.
226
227       ·   type
228
229           This parameter specifies what type of object to create, a hash or
230           array. Use one of these two constants:
231
232           ·   "DBM::Deep->TYPE_HASH"
233
234           ·   "DBM::Deep->TYPE_ARRAY"
235
236           This only takes effect when beginning a new file. This is an
237           optional parameter, and defaults to "DBM::Deep->TYPE_HASH".
238
239       ·   autoflush
240
241           Specifies whether autoflush is to be enabled on the underlying
242           filehandle.  This obviously slows down write operations, but is
243           required if you may have multiple processes accessing the same DB
244           file (also consider enabling locking).  Pass any true value to
245           enable. This is an optional parameter, and defaults to 1 (enabled).
246
247       ·   filter_*
248
249           See "FILTERS" below.
250
251       The following parameters may be specified in the constructor the first
252       time the datafile is created. However, they will be stored in the
253       header of the file and cannot be overridden by subsequent openings of
254       the file - the values will be set from the values stored in the
255       datafile's header.
256
257       ·   num_txns
258
259           This is the number of transactions that can be running at one time.
260           The default is one - the HEAD. The minimum is one and the maximum
261           is 255. The more transactions, the larger and quicker the datafile
262           grows.
263
264           Simple access to a database, regardless of how many processes are
265           doing it, already counts as one transaction (the HEAD). So, if you
266           want, say, 5 processes to be able to call begin_work at the same
267           time, "num_txns" must be at least 6.
268
269           See "TRANSACTIONS" below.
270
271       ·   max_buckets
272
273           This is the number of entries that can be added before a
274           reindexing. The larger this number is made, the larger a file gets,
275           but the better performance you will have. The default and minimum
276           number this can be is 16. The maximum is 256, but more than 64
277           isn't recommended.
278
279       ·   data_sector_size
280
281           This is the size in bytes of a given data sector. Data sectors will
282           chain, so a value of any size can be stored. However, chaining is
283           expensive in terms of time. Setting this value to something close
284           to the expected common length of your scalars will improve your
285           performance. If it is too small, your file will have a lot of
286           chaining. If it is too large, your file will have a lot of dead
287           space in it.
288
289           The default for this is 64 bytes. The minimum value is 32 and the
290           maximum is 256 bytes.
291
292           Note: There are between 6 and 10 bytes taken up in each data sector
293           for bookkeeping. (It's 4 + the number of bytes in your
294           "pack_size".) This is included within the data_sector_size, thus
295           the effective value is 6-10 bytes less than what you specified.
296
297           Another note: If your strings contain any characters beyond the
298           byte range, they will be encoded as UTF-8 before being stored in
299           the file. This will make all non-ASCII characters take up more than
300           one byte each.
301
302       ·   pack_size
303
304           This is the size of the file pointer used throughout the file. The
305           valid values are:
306
307           ·   small
308
309               This uses 2-byte offsets, allowing for a maximum file size of
310               65 KB.
311
312           ·   medium (default)
313
314               This uses 4-byte offsets, allowing for a maximum file size of 4
315               GB.
316
317           ·   large
318
319               This uses 8-byte offsets, allowing for a maximum file size of
320               16 EB (exabytes). This can only be enabled if your Perl is
321               compiled for 64-bit.
322
323           See "LARGEFILE SUPPORT" for more information.
324
325       ·   external_refs
326
327           This is a boolean option. When enabled, it allows external
328           references to database entries to hold on to those entries, even
329           when they are deleted.
330
331           To illustrate, if you retrieve a hash (or array) reference from the
332           database,
333
334             $foo_hash = $db->{foo};
335
336           the hash reference is still tied to the database. So if you
337
338             delete $db->{foo};
339
340           $foo_hash will point to a location in the DB that is no longer
341           valid (we call this a stale reference). So if you try to retrieve
342           the data from $foo_hash,
343
344             for(keys %$foo_hash) {
345
346           you will get an error.
347
348           The "external_refs" option causes $foo_hash to 'hang on' to the DB
349           entry, so it will not be deleted from the database if there is
350           still a reference to it in a running program. It will be deleted,
351           instead, when the $foo_hash variable no longer exists, or is
352           overwritten.
353
354           This has the potential to cause database bloat if your program
355           crashes, so it is not enabled by default. (See also the "export"
356           method for an alternative workaround.)
357
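       Putting several of the options above together, a constructor call might
       look like the following sketch (the values are illustrative, not
       recommendations):

         my $db = DBM::Deep->new(
             file             => "foo.db",
             locking          => 1,
             autoflush        => 1,
             num_txns         => 6,       # the HEAD plus 5 concurrent transactions
             max_buckets      => 32,
             data_sector_size => 128,     # for longer-than-average scalars
             pack_size        => 'medium',
         );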

TIE INTERFACE

359       With DBM::Deep you can access your databases using Perl's standard
360       hash/array syntax. Because all DBM::Deep objects are tied to hashes or
361       arrays, you can treat them as such (but see "external_refs", above, and
362       "Stale References", below). DBM::Deep will intercept all reads/writes
363       and direct them to the right place -- the DB file. This has nothing to
364       do with the "Tie Construction" section above. This simply tells you how
365       to use DBM::Deep using regular hashes and arrays, rather than calling
366       functions like "get()" and "put()" (although those work too). It is
367       entirely up to you how you want to access your databases.
368
369   Hashes
370       You can treat any DBM::Deep object like a normal Perl hash reference.
371       Add keys, or even nested hashes (or arrays) using standard Perl syntax:
372
373         my $db = DBM::Deep->new( "foo.db" );
374
375         $db->{mykey} = "myvalue";
376         $db->{myhash} = {};
377         $db->{myhash}->{subkey} = "subvalue";
378
379         print $db->{myhash}->{subkey} . "\n";
380
381       You can even step through hash keys using the normal Perl "keys()"
382       function:
383
384         foreach my $key (keys %$db) {
385             print "$key: " . $db->{$key} . "\n";
386         }
387
388       Remember that Perl's "keys()" function extracts every key from the hash
389       and pushes them onto an array, all before the loop even begins. If you
390       have an extremely large hash, this may exhaust Perl's memory. Instead,
391       consider using Perl's "each()" function, which pulls keys/values one at
392       a time, using very little memory:
393
394         while (my ($key, $value) = each %$db) {
395             print "$key: $value\n";
396         }
397
398       Please note that when using "each()", you should always pass a direct
399       hash reference, not a lookup. Meaning, you should never do this:
400
401         # NEVER DO THIS
402         while (my ($key, $value) = each %{$db->{foo}}) { # BAD
403
404       This causes an infinite loop, because for each iteration, Perl is
405       calling FETCH() on the $db handle, resulting in a "new" hash for foo
406       every time, so it effectively keeps returning the first key over and
407       over again. Instead, assign a temporary variable to "$db->{foo}", then
408       pass that to each().
409
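       For example, this sketch iterates a nested hash safely by taking the
       reference once, up front:

         my $foo = $db->{foo};   # fetch the tied child hash once
         while (my ($key, $value) = each %$foo) {
             print "$key: $value\n";
         }
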
410   Arrays
411       As with hashes, you can treat any DBM::Deep object like a normal Perl
412       array reference. This includes inserting, removing and manipulating
413       elements, and the "push()", "pop()", "shift()", "unshift()" and
414       "splice()" functions.  The object must have first been created using
415       type "DBM::Deep->TYPE_ARRAY", or simply be a nested array reference
416       inside a hash. Example:
417
418         my $db = DBM::Deep->new(
419             file => "foo-array.db",
420             type => DBM::Deep->TYPE_ARRAY
421         );
422
423         $db->[0] = "foo";
424         push @$db, "bar", "baz";
425         unshift @$db, "bah";
426
427         my $last_elem   = pop @$db;   # baz
428         my $first_elem  = shift @$db; # bah
429         my $second_elem = $db->[1];   # bar
430
431         my $num_elements = scalar @$db;
432

OO INTERFACE

434       In addition to the tie() interface, you can also use a standard OO
435       interface to manipulate all aspects of DBM::Deep databases. Each type
436       of object (hash or array) has its own methods, but both types share the
437       following common methods: "put()", "get()", "exists()", "delete()" and
438       "clear()". "store()" and "fetch()" are aliases to "put()" and "get()",
439       respectively.
440
441       ·   new() / clone()
442
443           These are the constructor and copy-functions.
444
445       ·   put() / store()
446
447           Stores a new hash key/value pair, or sets an array element value.
448           Takes two arguments, the hash key or array index, and the new
449           value. The value can be a scalar, hash ref or array ref. Returns
450           true on success, false on failure.
451
452             $db->put("foo", "bar"); # for hashes
453             $db->put(1, "bar"); # for arrays
454
455       ·   get() / fetch()
456
457           Fetches the value of a hash key or array element. Takes one
458           argument: the hash key or array index. Returns a scalar, hash ref
459           or array ref, depending on the data type stored.
460
461             my $value = $db->get("foo"); # for hashes
462             my $value = $db->get(1); # for arrays
463
464       ·   exists()
465
466           Checks if a hash key or array index exists. Takes one argument: the
467           hash key or array index. Returns true if it exists, false if not.
468
469             if ($db->exists("foo")) { print "yay!\n"; } # for hashes
470             if ($db->exists(1)) { print "yay!\n"; } # for arrays
471
472       ·   delete()
473
474           Deletes one hash key/value pair or array element. Takes one
475           argument: the hash key or array index. Returns the data that the
476           element used to contain (just like Perl's "delete" function), which
477           is "undef" if it did not exist. For arrays, the remaining elements
478           located after the deleted element are NOT moved over. The deleted
479           element is essentially just undefined, which is exactly how Perl's
480           internal arrays work.
481
482             $db->delete("foo"); # for hashes
483             $db->delete(1); # for arrays
484
485       ·   clear()
486
487           Deletes all hash keys or array elements. Takes no arguments. No
488           return value.
489
490             $db->clear(); # hashes or arrays
491
492       ·   lock() / unlock() / lock_exclusive() / lock_shared()
493
494           q.v. "LOCKING" for more info.
495
496       ·   optimize()
497
498           This will compress the datafile so that it takes up as little space
499           as possible.  There is a freespace manager so that when space is
500           freed up, it is used before extending the size of the datafile.
501           But, that freespace just sits in the datafile unless "optimize()"
502           is called.
503
504           "optimize" basically copies everything into a new database, so, if
505           it is in version 1.0003 format, it will be upgraded.
506
507       ·   import()
508
509           Unlike simple assignment, "import()" does not tie the right-hand
510           side. Instead, a copy of your data is put into the DB. "import()"
511           takes either an arrayref (if your DB is an array) or a hashref (if
512           your DB is a hash). "import()" will die if anything else is passed
513           in.
514
515       ·   export()
516
517           This returns a complete copy of the data structure at the point you
518           do the export.  This copy is in RAM, not on disk like the DB is.
519
520       ·   begin_work() / commit() / rollback()
521
522           These are the transactional functions. See "TRANSACTIONS" for more
523           information.
524
525       ·   supports( $option )
526
527           This returns a boolean indicating whether this instance of
528           DBM::Deep supports that feature. $option can be one of:
529
530           ·   transactions
531
532           ·   unicode
533
534       ·   db_version()
535
536           This returns the version of the database format that the current
537           database is in. This is specified as the earliest version of
538           DBM::Deep that supports it.
539
540           For the File back end, this will be 1.0003 or 2.
541
542           For the DBI back end, it is currently always 1.0020.
543
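       For example, a program that only uses transactions where the back end
       provides them might look like this sketch (it assumes the file was
       created with "num_txns" of at least 2):

         if ( $db->supports('transactions') ) {
             $db->begin_work;
             $db->{counter}++;
             $db->commit;
         }
         else {
             $db->{counter}++;   # no transactional protection available
         }

         print "Database format version: ", $db->db_version, "\n";
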
544   Hashes
545       For hashes, DBM::Deep supports all the common methods described above,
546       and the following additional methods: "first_key()" and "next_key()".
547
548       ·   first_key()
549
550           Returns the "first" key in the hash. As with built-in Perl hashes,
551           keys are fetched in an undefined order (which appears random).
552           Takes no arguments, returns the key as a scalar value.
553
554             my $key = $db->first_key();
555
556       ·   next_key()
557
558           Returns the "next" key in the hash, given the previous one as the
559           sole argument.  Returns undef if there are no more keys to be
560           fetched.
561
562             $key = $db->next_key($key);
563
564       Here are some examples of using hashes:
565
566         my $db = DBM::Deep->new( "foo.db" );
567
568         $db->put("foo", "bar");
569         print "foo: " . $db->get("foo") . "\n";
570
571         $db->put("baz", {}); # new child hash ref
572         $db->get("baz")->put("buz", "biz");
573         print "buz: " . $db->get("baz")->get("buz") . "\n";
574
575         my $key = $db->first_key();
576         while ($key) {
577             print "$key: " . $db->get($key) . "\n";
578             $key = $db->next_key($key);
579         }
580
581         if ($db->exists("foo")) { $db->delete("foo"); }
582
583   Arrays
584       For arrays, DBM::Deep supports all the common methods described above,
585       and the following additional methods: "length()", "push()", "pop()",
586       "shift()", "unshift()" and "splice()".
587
588       ·   length()
589
590           Returns the number of elements in the array. Takes no arguments.
591
592             my $len = $db->length();
593
594       ·   push()
595
596           Adds one or more elements onto the end of the array. Accepts
597           scalars, hash refs or array refs. No return value.
598
599             $db->push("foo", "bar", {});
600
601       ·   pop()
602
603           Fetches the last element in the array, and deletes it. Takes no
604           arguments.  Returns undef if array is empty. Returns the element
605           value.
606
607             my $elem = $db->pop();
608
609       ·   shift()
610
611           Fetches the first element in the array, deletes it, then shifts all
612           the remaining elements over to take up the space. Returns the
613           element value. This method is not recommended with large arrays --
614           see "Large Arrays" below for details.
615
616             my $elem = $db->shift();
617
618       ·   unshift()
619
620           Inserts one or more elements onto the beginning of the array,
621           shifting all existing elements over to make room. Accepts scalars,
622           hash refs or array refs.  No return value. This method is not
623           recommended with large arrays -- see "Large Arrays" below for
624           details.
625
626             $db->unshift("foo", "bar", {});
627
628       ·   splice()
629
630           Performs exactly like Perl's built-in function of the same name.
631           See "splice" in perlfunc for usage -- it is too complicated to
632           document here. This method is not recommended with large arrays --
633           see "Large Arrays" below for details.
634
635       Here are some examples of using arrays:
636
637         my $db = DBM::Deep->new(
638             file => "foo.db",
639             type => DBM::Deep->TYPE_ARRAY
640         );
641
642         $db->push("bar", "baz");
643         $db->unshift("foo");
644         $db->put(3, "buz");
645
646         my $len = $db->length();
647         print "length: $len\n"; # 4
648
649         for (my $k=0; $k<$len; $k++) {
650             print "$k: " . $db->get($k) . "\n";
651         }
652
653         $db->splice(1, 2, "biz", "baf");
654
655         while (my $elem = shift @$db) {
656             print "shifted: $elem\n";
657         }
658

LOCKING

660       Enable or disable automatic file locking by passing a boolean value to
661       the "locking" parameter when constructing your DBM::Deep object (see
662       "SETUP" above).
663
664         my $db = DBM::Deep->new(
665             file => "foo.db",
666             locking => 1
667         );
668
669       This causes DBM::Deep to "flock()" the underlying filehandle with
670       exclusive mode for writes, and shared mode for reads. This is required
671       if you have multiple processes accessing the same database file, to
672       avoid file corruption.  Please note that "flock()" does NOT work for
673       files over NFS. See "DB over NFS" below for more.
674
675   Explicit Locking
676       You can explicitly lock a database, so it remains locked for multiple
677       actions. This is done by calling the "lock_exclusive()" method (for
678       when you want to write) or the "lock_shared()" method (for when you
679       want to read).  This is particularly useful for things like counters,
680       where the current value needs to be fetched, then incremented, then
681       stored again.
682
683         $db->lock_exclusive();
684         my $counter = $db->get("counter");
685         $counter++;
686         $db->put("counter", $counter);
687         $db->unlock();
688
689         # or...
690
691         $db->lock_exclusive();
692         $db->{counter}++;
693         $db->unlock();
694
695   Win32/Cygwin
696       Due to Win32 actually enforcing the read-only status of a shared lock,
697       all locks on Win32 and cygwin are exclusive. This is because of how
698       autovivification currently works. Hopefully, this will go away in a
699       future release.
700

IMPORTING/EXPORTING

702       You can import existing complex structures by calling the "import()"
703       method, and export an entire database into an in-memory structure using
704       the "export()" method. Both are examined here.
705
706   Importing
707       Say you have an existing hash with nested hashes/arrays inside it.
708       Instead of walking the structure and adding keys/elements to the
709       database as you go, simply pass a reference to the "import()" method.
710       This recursively adds everything to an existing DBM::Deep object for
711       you. Here is an example:
712
713         my $struct = {
714             key1 => "value1",
715             key2 => "value2",
716             array1 => [ "elem0", "elem1", "elem2" ],
717             hash1 => {
718                 subkey1 => "subvalue1",
719                 subkey2 => "subvalue2"
720             }
721         };
722
723         my $db = DBM::Deep->new( "foo.db" );
724         $db->import( $struct );
725
726         print $db->{key1} . "\n"; # prints "value1"
727
728       This recursively imports the entire $struct object into $db, including
729       all nested hashes and arrays. If the DBM::Deep object contains existing
730       data, keys are merged with the existing ones, replacing if they already
731       exist.  The "import()" method can be called on any database level (not
732       just the base level), and works with both hash and array DB types.
733
734       Note: Make sure your existing structure has no circular references in
735       it.  These will cause an infinite loop when importing. There are plans
736       to fix this in a later release.
737
738   Exporting
739       Calling the "export()" method on an existing DBM::Deep object will
740       return a reference to a new in-memory copy of the database. The export
741       is done recursively, so all nested hashes/arrays are all exported to
742       standard Perl objects. Here is an example:
743
744         my $db = DBM::Deep->new( "foo.db" );
745
746         $db->{key1} = "value1";
747         $db->{key2} = "value2";
748         $db->{hash1} = {};
749         $db->{hash1}->{subkey1} = "subvalue1";
750         $db->{hash1}->{subkey2} = "subvalue2";
751
752         my $struct = $db->export();
753
754         print $struct->{key1} . "\n"; # prints "value1"
755
756       This makes a complete copy of the database in memory, and returns a
757       reference to it. The "export()" method can be called on any database
758       level (not just the base level), and works with both hash and array DB
759       types. Be careful of large databases -- you can store a lot more data
760       in a DBM::Deep object than an in-memory Perl structure.
761
762       Note: Make sure your database has no circular references in it.  These
763       will cause an infinite loop when exporting. There are plans to fix this
764       in a later release.
765

FILTERS

767       DBM::Deep has a number of hooks where you can specify your own Perl
768       function to perform filtering on incoming or outgoing data. This is a
769       perfect way to extend the engine, and implement things like real-time
770       compression or encryption. Filtering applies to the base DB level, and
771       all child hashes / arrays. Filter hooks can be specified when your
772       DBM::Deep object is first constructed, or by calling the "set_filter()"
773       method at any time. There are four available filter hooks.
774
775   set_filter()
776       This method takes two parameters - the filter type and a reference to
777       the filter subroutine.  The four types are:
778
779       ·   filter_store_key
780
781           This filter is called whenever a hash key is stored. It is passed
782           the incoming key, and expected to return a transformed key.
783
784       ·   filter_store_value
785
786           This filter is called whenever a hash key or array element is
787           stored. It is passed the incoming value, and expected to return a
788           transformed value.
789
790       ·   filter_fetch_key
791
792           This filter is called whenever a hash key is fetched (i.e. via
793           "first_key()" or "next_key()"). It is passed the transformed key,
794           and expected to return the plain key.
795
796       ·   filter_fetch_value
797
798           This filter is called whenever a hash key or array element is
799           fetched.  It is passed the transformed value, and expected to
800           return the plain value.
801
802       Here are the two ways to set up a filter hook:
803
804         my $db = DBM::Deep->new(
805             file => "foo.db",
806             filter_store_value => \&my_filter_store,
807             filter_fetch_value => \&my_filter_fetch
808         );
809
810         # or...
811
812         $db->set_filter( "store_value", \&my_filter_store );
813         $db->set_filter( "fetch_value", \&my_filter_fetch );
814
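       The my_filter_store() and my_filter_fetch() routines above are
       placeholders. A trivial, reversible pair might look like this sketch
       (real filters usually do compression or encryption instead; see
       DBM::Deep::Cookbook):

         sub my_filter_store {
             my ($value) = @_;
             $value =~ tr/A-Za-z/N-ZA-Mn-za-m/;   # ROT13 before writing
             return $value;
         }

         sub my_filter_fetch {
             my ($value) = @_;
             $value =~ tr/A-Za-z/N-ZA-Mn-za-m/;   # ROT13 is its own inverse
             return $value;
         }
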
815       Your filter function will be called only when dealing with SCALAR keys
816       or values. When nested hashes and arrays are being stored/fetched,
817       filtering is bypassed. Filters are called as static functions, passed a
818       single SCALAR argument, and expected to return a single SCALAR value.
819       If you want to remove a filter, set the function reference to "undef":
820
821         $db->set_filter( "store_value", undef );
822
823   Examples
824       Please read DBM::Deep::Cookbook for examples of filters.
825

ERROR HANDLING

827       Most DBM::Deep methods return a true value for success, and call die()
828       on failure. You can wrap calls in an eval block to catch the die.
829
830         my $db = DBM::Deep->new( "foo.db" ); # create hash
831         eval { $db->push("foo"); }; # ILLEGAL -- push is array-only call
832
833         print $@;           # prints error message
834

LARGEFILE SUPPORT

836       If you have a 64-bit system, and your Perl is compiled with both
837       LARGEFILE and 64-bit support, you may be able to create databases
838       larger than 4 GB.  DBM::Deep by default uses 32-bit file offset tags,
839       but these can be changed by specifying the 'pack_size' parameter when
840       constructing the file.
841
842         DBM::Deep->new(
843             file      => $filename,
844             pack_size => 'large',
845         );
846
847       This tells DBM::Deep to pack all file offsets with 8-byte (64-bit) quad
848       words instead of 32-bit longs. After setting these values your DB files
849       have a theoretical maximum size of 16 EB (exabytes).
850
851       You can also use "pack_size => 'small'" in order to use 16-bit file
852       offsets.
853
854       Note: Changing these values will NOT work for existing database files.
855       Only change this for new files. Once the value has been set, it is
856       stored in the file's header and cannot be changed for the life of the
857       file. These parameters are per-file, meaning you can access 32-bit and
858       64-bit files, as you choose.
859
860       Note: We have not personally tested files larger than 4 GB -- all our
861       systems have only a 32-bit Perl. However, we have received user reports
862       that this does indeed work.
863

LOW-LEVEL ACCESS

865       If you require low-level access to the underlying filehandle that
866       DBM::Deep uses, you can call the "_fh()" method, which returns the
867       handle:
868
869         my $fh = $db->_fh();
870
871       This method can be called on the root level of the database, or any
872       child hashes or arrays. All levels share a root structure, which
873       contains things like the filehandle, a reference counter, and all the
874       options specified when you created the object. You can get access to
875       this file object by calling the "_storage()" method.
876
877         my $file_obj = $db->_storage();
878
879       This is useful for changing options after the object has already been
880       created, such as enabling/disabling locking. You can also store your
881       own temporary user data in this structure (be wary of name collision),
882       which is then accessible from any child hash or array.
883

CIRCULAR REFERENCES

885       DBM::Deep has full support for circular references. Meaning you can
886       have a nested hash key or array element that points to a parent object.
887       This relationship is stored in the DB file, and is preserved between
888       sessions.  Here is an example:
889
890         my $db = DBM::Deep->new( "foo.db" );
891
892         $db->{foo} = "bar";
893         $db->{circle} = $db; # ref to self
894
895         print $db->{foo} . "\n"; # prints "bar"
896         print $db->{circle}->{foo} . "\n"; # prints "bar" again
897
898       This also works as expected with array and hash references. So, the
899       following works as expected:
900
901         $db->{foo} = [ 1 .. 3 ];
902         $db->{bar} = $db->{foo};
903
904         push @{$db->{foo}}, 42;
905         is( $db->{bar}[-1], 42 ); # Passes
906
907       This, however, does not extend to assignments from one DB file to
908       another.  So, the following will throw an error:
909
910         my $db1 = DBM::Deep->new( "foo.db" );
911         my $db2 = DBM::Deep->new( "bar.db" );
912
913         $db1->{foo} = [];
914         $db2->{foo} = $db1->{foo}; # dies
915
916       Note: Passing the object to a function that recursively walks the
917       object tree (such as Data::Dumper or even the built-in "optimize()" or
918       "export()" methods) will result in an infinite loop. This will be fixed
919       in a future release by adding singleton support.
920

TRANSACTIONS

922       As of 1.0000, DBM::Deep has ACID transactions. Every DBM::Deep object
923       is completely transaction-ready - it is not an option you have to turn
924       on. You do have to specify how many transactions may run simultaneously
925       (q.v. "num_txns").
926
927       Three new methods have been added to support them. They are:
928
929       ·   begin_work()
930
931           This starts a transaction.
932
933       ·   commit()
934
935           This applies the changes done within the transaction to the
936           mainline and ends the transaction.
937
938       ·   rollback()
939
940           This discards the changes done within the transaction to the
941           mainline and ends the transaction.
942
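       A minimal sketch, assuming the file was created with "num_txns" of at
       least 2 so that one transaction can run alongside the HEAD:

         my $db = DBM::Deep->new( file => "foo.db", num_txns => 2 );

         $db->{balance} = 100;

         $db->begin_work;
         $db->{balance} -= 40;
         $db->rollback;              # balance is back to 100

         $db->begin_work;
         $db->{balance} -= 40;
         $db->commit;                # balance is now 60
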
943       Transactions in DBM::Deep are done using a variant of the MVCC method,
944       the same method used by the InnoDB MySQL engine.
945

MIGRATION

947       As of 1.0000, the file format has changed. To aid in upgrades, a
948       migration script is provided within the CPAN distribution, called
949       utils/upgrade_db.pl.
950
951       NOTE: This script is not installed onto your system because it carries
952       a copy of every version prior to the current version.
953
954       As of version 2.0000, databases created by old versions back to 1.0003
955       can be read, but new features may not be available unless the database
956       is upgraded first.
957

TODO

959       The following are items that are planned to be added in future
960       releases. These are separate from the "CAVEATS, ISSUES & BUGS" below.
961
962   Sub-Transactions
963       Right now, you cannot run a transaction within a transaction. Removing
964       this restriction is technically straightforward, but the combinatorial
965       explosion of possible usecases hurts my head. If this is something you
966       want to see immediately, please submit many testcases.
967
968   Caching
969       If a client is willing to assert upon opening the file that this
970       process will be the only consumer of that datafile, then there are a
971       number of caching possibilities that can be taken advantage of. This
972       does, however, mean that DBM::Deep is more vulnerable to losing data
973       due to unflushed changes. It also means a much larger in-memory
974       footprint. As such, it's not clear exactly how this should be done.
975       Suggestions are welcome.
976
977   Ram-only
978       The techniques used in DBM::Deep simply require a seekable contiguous
979       datastore. This could just as easily be a large string as a file. By
980       using substr, the STM capabilities of DBM::Deep could be used within a
981       single process. I have no idea how I'd specify this, though.
982       Suggestions are welcome.
983
984   Different contention resolution mechanisms
985       Currently, the only contention resolution mechanism is last-write-wins.
986       This is the mechanism used by most RDBMSes and should be good enough
987       for most uses.  For advanced uses of STM, other contention mechanisms
988       will be needed. If you have an idea of how you'd like to see contention
989       resolution in DBM::Deep, please let me know.
990

CAVEATS, ISSUES & BUGS

992       This section describes all the known issues with DBM::Deep. These are
993       issues that are either intractable or depend on some feature within
994       Perl working exactly right. If you have found something that is not
995       listed below, please send an e-mail to bug-DBM-Deep@rt.cpan.org
996       <mailto:bug-DBM-Deep@rt.cpan.org>.  Likewise, if you think you know of
997       a way around one of these issues, please let me know.
998
999   References
1000       (The following assumes a high level of Perl understanding, specifically
1001       of references. Most users can safely skip this section.)
1002
1003       Currently, the only references supported are HASH and ARRAY. The other
1004       reference types (SCALAR, CODE, GLOB, and REF) cannot be supported for
1005       various reasons.
1006
1007       ·   GLOB
1008
1009           These are things like filehandles and sockets. They can't be
1010           supported because it's completely unclear how DBM::Deep should
1011           serialize them.
1012
1013       ·   SCALAR / REF
1014
1015           The discussion here refers to the following type of example:
1016
1017             my $x = 25;
1018             $db->{key1} = \$x;
1019
1020             $x = 50;
1021
1022             # In some other process ...
1023
1024             my $val = ${ $db->{key1} };
1025
1026             is( $val, 50, "What actually gets stored in the DB file?" );
1027
1028           The problem is one of synchronization. When the variable being
1029           referred to changes value, the reference isn't notified, which is
1030           kind of the point of references. This means that the new value
1031           won't be stored in the datafile for other processes to read. There
1032           is no TIEREF.
1033
1034           It is theoretically possible to store references to values already
1035           within a DBM::Deep object because everything already is
1036           synchronized, but the change to the internals would be quite large.
1037           Specifically, DBM::Deep would have to tie every single value that
1038           is stored. This would bloat the RAM footprint of DBM::Deep at least
1039           twofold (if not more) and be a significant performance drain, all
1040           to support a feature that has never been requested.
1041
1042       ·   CODE
1043
1044           Data::Dump::Streamer provides a mechanism for serializing coderefs,
1045           including saving off all closure state. This would allow for
1046           DBM::Deep to store the code for a subroutine. Then, whenever the
1047           subroutine is read, the code could be "eval()"'ed into being.
1048           However, just as for SCALAR and REF, that closure state may change
1049           without notifying the DBM::Deep object storing the reference.
1050           Again, this would generally be considered a feature.
1051
1052   External references and transactions
1053       If you do "my $x = $db->{foo};", then start a transaction, $x will be
1054       referencing the database from outside the transaction. A fix for this
1055       (and other issues with how external references into the database are
1056       handled) is being looked into. This is the skipped set of tests in
1057       t/39_singletons.t and a related issue is the focus of
1058       t/37_delete_edge_cases.t.
1059
1060   File corruption
1061       The current level of error handling in DBM::Deep is minimal. Files are
1062       checked for a 32-bit signature when opened, but any other form of
1063       corruption in the datafile can cause segmentation faults. DBM::Deep may
1064       try to "seek()" past the end of a file, or get stuck in an infinite
1065       loop depending on the level and type of corruption. File write
1066       operations are not checked for failure (for speed), so if you happen to
1067       run out of disk space, DBM::Deep will probably fail in a bad way. These
1068       things will be addressed in a later version of DBM::Deep.
1069
1070   DB over NFS
1071       Beware of using DBM::Deep files over NFS. DBM::Deep uses flock(), which
1072       works well on local filesystems, but will NOT protect you from file
1073       corruption over NFS. I've heard about setting up your NFS server with a
1074       locking daemon, then using "lockf()" to lock your files, but your
1075       mileage may vary there as well.  From what I understand, there is no
1076       real way to do it. However, if you need access to the underlying
1077       filehandle in DBM::Deep for using some other kind of locking scheme
1078       like "lockf()", see the "LOW-LEVEL ACCESS" section above.
1079
1080   Copying Objects
1081       Beware of copying tied objects in Perl. Very strange things can happen.
1082       Instead, use DBM::Deep's "clone()" method which safely copies the
1083       object and returns a new, blessed and tied hash or array to the same
1084       level in the DB.
1085
1086         my $copy = $db->clone();
1087
1088       Note: Since clone() here is cloning the object, not the database
1089       location, any modifications to either $db or $copy will be visible to
1090       both.
1091
1092   Stale References
1093       If you take a reference to an array or hash from the database, it is
1094       tied to the database itself. This means that if the datum in question
1095       is subsequently deleted from the database, the reference to it will
1096       point to an invalid location and unpredictable things will happen if
1097       you try to use it.
1098
1099       So a seemingly innocuous piece of code like this:
1100
1101         my %hash = %{ $db->{some_hash} };
1102
1103       can fail if another process deletes or clobbers "$db->{some_hash}"
1104       while the data are being extracted, since "%{ ... }" is not atomic.
1105       (This actually happened.) The solution is to lock the database before
1106       reading the data:
1107
1108         $db->lock_exclusive;
1109         my %hash = %{ $db->{some_hash} };
1110         $db->unlock;
1111
1112       As of version 1.0024, if you assign a stale reference to a location in
1113       the database, DBM::Deep will warn, if you have uninitialized warnings
1114       enabled, and treat the stale reference as "undef". An attempt to use a
1115       stale reference as an array or hash reference will cause an error.
1116
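       A short sketch of what that behavior looks like in practice:

         my $foo = $db->{foo};    # a reference into the database
         delete $db->{foo};       # $foo is now a stale reference

         $db->{bar} = $foo;       # warns (under 'uninitialized') and stores undef
         my @keys = keys %$foo;   # dies: stale reference used as a hash ref
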
1117   Large Arrays
1118       Beware of using "shift()", "unshift()" or "splice()" with large arrays.
1119       These functions cause every element in the array to move, which can be
1120       murder on DBM::Deep, as every element has to be fetched from disk, then
1121       stored again in a different location. This will be addressed in a
1122       future version.
1123
1124       This has been somewhat addressed so that the cost is constant,
1125       regardless of what is stored at those locations. So, small arrays with
1126       huge data structures in them are faster. But, large arrays are still
1127       large.
1128
1129   Writeonly Files
1130       If you pass in a filehandle to new(), you may have opened it in either
1131       a readonly or writeonly mode. STORE will verify that the filehandle is
1132       writable.  However, there doesn't seem to be a good way to determine if
1133       a filehandle is readable. And, if the filehandle isn't readable, it's
1134       not clear what will happen. So, don't do that.
1135
1136   Assignments Within Transactions
1137       The following will not work as one might expect:
1138
1139         my $x = { a => 1 };
1140
1141         $db->begin_work;
1142         $db->{foo} = $x;
1143         $db->rollback;
1144
1145         is( $x->{a}, 1 ); # This will fail!
1146
1147       The problem is that the moment a reference is used as the rvalue to a
1148       DBM::Deep object's lvalue, it becomes tied itself. This is so that
1149       future changes to $x can be tracked within the DBM::Deep file, and it
1150       is considered a feature. By the time the rollback occurs, there is
1151       no knowledge that there had been an $x or what memory location to
1152       assign an "export()" to.
1153
1154       NOTE: This does not affect importing because imports do a walk over the
1155       reference to be imported in order to explicitly leave it untied.
1156

CODE COVERAGE

1158       Devel::Cover is used to test the code coverage of the tests. Below is
1159       the Devel::Cover report on this distribution's test suite.
1160
1161         ---------------------------- ------ ------ ------ ------ ------ ------ ------
1162         File                           stmt   bran   cond    sub    pod   time  total
1163         ---------------------------- ------ ------ ------ ------ ------ ------ ------
1164         blib/lib/DBM/Deep.pm          100.0   89.1   82.9  100.0  100.0   32.5   98.1
1165         blib/lib/DBM/Deep/Array.pm    100.0   94.4  100.0  100.0  100.0    5.2   98.8
1166         blib/lib/DBM/Deep/Engine.pm   100.0   92.9  100.0  100.0  100.0    7.4  100.0
1167         ...ib/DBM/Deep/Engine/DBI.pm   95.0   73.1  100.0  100.0  100.0    1.5   90.4
1168         ...b/DBM/Deep/Engine/File.pm   92.3   78.5   88.9  100.0  100.0    4.9   90.3
1169         blib/lib/DBM/Deep/Hash.pm     100.0  100.0  100.0  100.0  100.0    3.8  100.0
1170         .../lib/DBM/Deep/Iterator.pm  100.0    n/a    n/a  100.0  100.0    0.0  100.0
1171         .../DBM/Deep/Iterator/DBI.pm  100.0  100.0    n/a  100.0  100.0    1.2  100.0
1172         ...DBM/Deep/Iterator/File.pm   92.5   84.6    n/a  100.0   66.7    0.6   90.0
1173         ...erator/File/BucketList.pm  100.0   75.0    n/a  100.0   66.7    0.4   93.8
1174         ...ep/Iterator/File/Index.pm  100.0  100.0    n/a  100.0  100.0    0.2  100.0
1175         blib/lib/DBM/Deep/Null.pm      87.5    n/a    n/a   75.0    n/a    0.0   83.3
1176         blib/lib/DBM/Deep/Sector.pm    91.7    n/a    n/a   83.3    0.0    6.7   74.4
1177         ...ib/DBM/Deep/Sector/DBI.pm   96.8   83.3    n/a  100.0    0.0    1.0   89.8
1178         ...p/Sector/DBI/Reference.pm  100.0   95.5  100.0  100.0    0.0    2.2   91.2
1179         ...Deep/Sector/DBI/Scalar.pm  100.0  100.0    n/a  100.0    0.0    1.1   92.9
1180         ...b/DBM/Deep/Sector/File.pm   96.0   87.5  100.0   92.3   25.0    2.2   91.0
1181         ...Sector/File/BucketList.pm   98.2   85.7   83.3  100.0    0.0    3.3   89.4
1182         .../Deep/Sector/File/Data.pm  100.0    n/a    n/a  100.0    0.0    0.1   90.9
1183         ...Deep/Sector/File/Index.pm  100.0   80.0   33.3  100.0    0.0    0.8   83.1
1184         .../Deep/Sector/File/Null.pm  100.0  100.0    n/a  100.0    0.0    0.0   91.7
1185         .../Sector/File/Reference.pm  100.0   90.0   80.0  100.0    0.0    1.4   91.5
1186         ...eep/Sector/File/Scalar.pm   98.4   87.5    n/a  100.0    0.0    0.8   91.9
1187         blib/lib/DBM/Deep/Storage.pm  100.0    n/a    n/a  100.0  100.0    0.0  100.0
1188         ...b/DBM/Deep/Storage/DBI.pm   97.3   70.8    n/a  100.0   38.5    6.7   87.0
1189         .../DBM/Deep/Storage/File.pm   96.6   77.1   80.0   95.7  100.0   16.0   91.8
1190         Total                          99.3   85.2   84.9   99.8   63.3  100.0   97.6
1191         ---------------------------- ------ ------ ------ ------ ------ ------ ------
1192

MORE INFORMATION

1194       Check out the DBM::Deep Google Group at
1195       <http://groups.google.com/group/DBM-Deep> or send email to
1196       DBM-Deep@googlegroups.com <mailto:DBM-Deep@googlegroups.com>.  You can
1197       also visit #dbm-deep on irc.perl.org.
1198
1199       The source code repository is at <http://github.com/robkinyon/dbm-deep>
1200

MAINTAINERS

1202       Rob Kinyon, rkinyon@cpan.org <mailto:rkinyon@cpan.org>
1203
1204       Originally written by Joseph Huckaby, jhuckaby@cpan.org
1205       <mailto:jhuckaby@cpan.org>
1206

SPONSORS

1208       Stonehenge Consulting (<http://www.stonehenge.com/>) sponsored the
1209       development of transactions and freespace management, leading to the
1210       1.0000 release. A great debt of gratitude goes out to them for their
1211       continuing leadership in and support of the Perl community.
1212

CONTRIBUTORS

1214       The following have contributed greatly to make DBM::Deep what it is
1215       today:
1216
1217       ·   Adam Sah and Rich Gaushell for innumerable contributions early on.
1218
1219       ·   Dan Golden and others at YAPC::NA 2006 for helping me design
1220           through transactions.
1221
1222       ·   James Stanley for a bug fix
1223
1224       ·   David Steinbrunner for fixing typos and adding repository cpan
1225           metadata
1226
1227       ·   H. Merijn Brand for fixing the POD escapes.
1228
1229       ·   Breno G. de Oliveira for minor packaging tweaks
1230

SEE ALSO

1232       DBM::Deep::Cookbook(3)
1233
1234       perltie(1), Tie::Hash(3), Fcntl(3), flock(2), lockf(3), nfs(5)
1235

LICENSE

1237       Copyright (c) 2007-14 Rob Kinyon. All Rights Reserved.  This is free
1238       software, you may use it and distribute it under the same terms as Perl
1239       itself.
1240
1241
1242
perl v5.28.1                      2019-02-02                      DBM::Deep(3)