
NAME

6       DBIx::Class::Storage::DBI::Replicated - BETA Replicated database
7       support
8

SYNOPSIS

10       The following example shows how to change an existing $schema to a
11       replicated storage type, add some replicated (read-only) databases, and
12       perform reporting tasks.
13
14       You should set the 'storage_type' attribute to a replicated type.  You
15       should also define your arguments, such as which balancer you want and
16       any arguments that the Pool object should get.
17
18         my $schema = Schema::Class->clone;
19         $schema->storage_type( ['::DBI::Replicated', {balancer=>'::Random'}] );
20         $schema->connection(...);
21
22       Next, you need to add in the Replicants.  Basically this is an array of
23       arrayrefs, where each arrayref is database connect information.  Think
24       of these arguments as what you'd pass to the 'normal' $schema->connect
25       method.
26
27         $schema->storage->connect_replicants(
28           [$dsn1, $user, $pass, \%opts],
29           [$dsn2, $user, $pass, \%opts],
30           [$dsn3, $user, $pass, \%opts],
31         );
32
33       Now, just use the $schema as you normally would.  All reads go to the
34       replicants automatically, while writes go to the master.
35
36         $schema->resultset('Source')->search({name=>'etc'});
37
38       You can force a given query to use a particular storage using the
39       search attribute 'force_pool'.  For example:
40
41         my $RS = $schema->resultset('Source')->search(undef, {force_pool=>'master'});
42
43       Now $RS will force everything (both reads and writes) to use whatever
44       was set up as the master storage.  'master' is hardcoded to always point
45       to the Master, but you can also use any Replicant name.  Please see:
46       DBIx::Class::Storage::DBI::Replicated::Pool and the replicants
47       attribute for more.
48
49       Also see transactions and "execute_reliably" for alternative ways to
50       force read traffic to the master.  In general, you should wrap your
51       statements in a transaction when you are reading and writing to the
52       same tables at the same time, since your replicants will often lag a
53       bit behind the master.
54
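       For example, a minimal sketch (assuming the $schema from above) that
       wraps a read-after-write in "txn_do" so both statements run against the
       master; the resultset and column names are purely illustrative:

         my $row = $schema->txn_do(sub {
           ## Inside a transaction all reads and writes use the master, so the
           ## find() below is guaranteed to see the row we just created.
           $schema->resultset('Source')->create({name => 'etc'});
           return $schema->resultset('Source')->find({name => 'etc'});
         });
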
55       See DBIx::Class::Storage::DBI::Replicated::Instructions for more help
56       and walkthroughs.
57

DESCRIPTION

59       Warning: This class is marked BETA.  It has been running a production
60       website using MySQL native replication as its backend and we have some
61       decent test coverage, but the code hasn't yet been stressed by a variety
62       of databases.  Individual DBs may have quirks we are not aware of.
63       Please try it in development first and pass along your experiences/bug
64       fixes.
65
66       This class implements a replicated data store for DBI.  Currently you
67       can define one master and numerous slave database connections.  All
68       write-type queries (INSERT, UPDATE, DELETE and even LAST_INSERT_ID) are
69       routed to the master database, while all read-type queries (SELECTs) go
70       to the slave databases.
71
72       Basically, any method request that DBIx::Class::Storage::DBI would
73       normally handle gets delegated to one of two attributes:
74       "read_handler" or "write_handler".  Additionally, some methods need
75       to be distributed to all existing storages.  This way our storage class
76       is a drop-in replacement for DBIx::Class::Storage::DBI.
77
78       Read traffic is spread across the replicants (slaves) according to a
79       user-selected algorithm.  The default algorithm is random weighted.
80

NOTES

82       The consistency between master and replicants is database specific.
83       The Pool gives you a method to validate its replicants, removing and
84       replacing them when they fail/pass predefined criteria.  Please make
85       careful use of the ways to force a query to run against Master when
86       needed.
87
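       For example, a minimal sketch of a periodic health check (the
       surrounding cron job or loop is assumed; validate_replicants is
       provided by the Pool class):

         ## Deactivates replicants that fail the Pool's criteria and
         ## reactivates those that pass again.
         $schema->storage->pool->validate_replicants;
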

REQUIREMENTS

89       Replicated Storage has additional requirements not currently part of
90       DBIx::Class. See DBIx::Class::Optional::Dependencies for more details.
91

ATTRIBUTES

93       This class defines the following attributes.
94
95   schema
96       The underlying DBIx::Class::Schema object this storage is attached to.
97
98   pool_type
99       Contains the classname which will instantiate the "pool" object.
100       Defaults to: DBIx::Class::Storage::DBI::Replicated::Pool.
101
102   pool_args
103       Contains a hashref of initialization arguments to pass to the Pool
104       object.  See DBIx::Class::Storage::DBI::Replicated::Pool for available
105       arguments.
106
107   balancer_type
108       The replication pool requires a balancer class to provide the methods
109       for choosing how to spread the query load across each replicant in the
110       pool.
111
112   balancer_args
113       Contains a hashref of initialization arguments to pass to the Balancer
114       object.  See DBIx::Class::Storage::DBI::Replicated::Balancer for
115       available arguments.
116
117   pool
118       Is a DBIx::Class::Storage::DBI::Replicated::Pool or derived class.
119       This is a container class for one or more replicated databases.
120
121   balancer
122       Is a DBIx::Class::Storage::DBI::Replicated::Balancer or derived
123       class.  This is a class that balances read traffic across a pool
124       (DBIx::Class::Storage::DBI::Replicated::Pool).
125
126   master
127       The master defines the canonical state for a pool of connected
128       databases.  All the replicants are expected to match this database's
129       state.  Thus, in a classic Master / Slaves distributed system, all the
130       slaves are expected to replicate the Master's state as quickly as
131       possible.  This is the only database in the pool of databases that is
132       allowed to handle write traffic.
133

ATTRIBUTES IMPLEMENTING THE DBIx::Class::Storage::DBI INTERFACE

135       The following attributes are delegated all the methods required for
136       the DBIx::Class::Storage::DBI interface.
137
138   read_handler
139       Defines an object that implements the read side of
140       DBIx::Class::Storage::DBI.
141
142   write_handler
143       Defines an object that implements the write side of
144       DBIx::Class::Storage::DBI, as well as methods that don't write or read
145       that can be called on only one storage, methods that return a $dbh, and
146       any methods that don't make sense to run on a replicant.
147
148   around: connect_info
149       Preserves the master's "connect_info" options (for merging with
150       replicants).  Also sets any Replicated-related options from
151       connect_info, such as "pool_type", "pool_args", "balancer_type" and
152       "balancer_args".
153
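       For example, a minimal sketch (assuming storage_type has already been
       set to ::DBI::Replicated as in the SYNOPSIS); the particular option
       values shown are illustrative only:

         $schema->connection(
           $dsn, $user, $pass,
           {
             balancer_type => '::Random',                   # Balancer subclass
             balancer_args => { auto_validate_every => 5 }, # seconds between pool checks
             pool_args     => { maximum_lag => 1 },         # max acceptable replicant lag
           },
         );
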

METHODS

155       This class defines the following methods.
156
157   BUILDARGS
158       When DBIx::Class::Schema instantiates its storage, it passes itself as
159       the first argument, so we need to massage the arguments a bit so that
160       all the bits get put into the correct places.
161
162   _build_master
163       Lazy builder for the "master" attribute.
164
165   _build_pool
166       Lazy builder for the "pool" attribute.
167
168   _build_balancer
169       Lazy builder for the "balancer" attribute.  This takes a Pool object so
170       that the balancer knows which pool it's balancing.
171
172   _build_write_handler
173       Lazy builder for the "write_handler" attribute.  The default is to set
174       this to the "master".
175
176   _build_read_handler
177       Lazy builder for the "read_handler" attribute.  The default is to set
178       this to the "balancer".
179
180   around: connect_replicants
181       All calls to connect_replicants need to have an existing $schema
182       tacked onto the top of the args, since DBIx::Class::Storage::DBI needs
183       it, and any "connect_info" options merged with the master's, with
184       replicant opts having higher priority.
185
186   all_storages
187       Returns an array of all the connected storage backends.  The first
188       element in the returned array is the master, and the rest are the
189       replicants.
190
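       A minimal sketch iterating over them (this assumes each storage was
       connected with a plain DSN-style connect_info, so the first element of
       its connect_info is the DSN):

         my ($master, @replicants) = $schema->storage->all_storages;
         for my $storage ($master, @replicants) {
           printf "connected to %s\n", $storage->connect_info->[0];
         }
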
191   execute_reliably ($coderef, ?@args)
192       Given a coderef, saves the current state of the "read_handler", forces
193       it to use reliable storage (i.e. sets it to the master), executes the
194       coderef and then restores the original state.
195
196       Example:
197
198         my $reliably = sub {
199           my $name = shift @_;
200           $schema->resultset('User')->create({name=>$name});
201           my $user_rs = $schema->resultset('User')->find({name=>$name});
202           return $user_rs;
203         };
204
205         my $user_rs = $schema->storage->execute_reliably($reliably, 'John');
206
207       Use this when you must be certain of your database state, such as when
208       you just inserted something and need to get a resultset including it,
209       etc.
210
211   set_reliable_storage
212       Sets the current $schema to be 'reliable', that is, all queries, both
213       read and write, are sent to the master.
214
215   set_balanced_storage
216       Sets the current $schema to use the "balancer" for all reads, while
217       all writes are sent to the master only.
218
219   connected
220       Checks that the master and at least one of the replicants are connected.
221
222   ensure_connected
223       Make sure all the storages are connected.
224
225   limit_dialect
226       Set the limit_dialect for all existing storages
227
228   quote_char
229       Set the quote_char for all existing storages
230
231   name_sep
232       Set the name_sep for all existing storages
233
234   set_schema
235       Set the schema object for all existing storages
236
237   debug
238       set a debug flag across all storages
239
240   debugobj
241       set a debug object
242
243   debugfh
244       set a debugfh object
245
246   debugcb
247       set a debug callback
248
249   disconnect
250       disconnect everything
251
252   cursor_class
253       set cursor class on all storages, or return master's
254
255   cursor
256       set cursor class on all storages, or return master's, alias for
257       "cursor_class" above.
258
259   unsafe
260       sets the "unsafe" in DBIx::Class::Storage::DBI option on all storages
261       or returns master's current setting
262
263   disable_sth_caching
264       sets the "disable_sth_caching" in DBIx::Class::Storage::DBI option on
265       all storages or returns master's current setting
266
267   lag_behind_master
268       returns the highest Replicant "lag_behind_master" in
269       DBIx::Class::Storage::DBI setting
270
271   is_replicating
272       returns true if all replicants return true for "is_replicating" in
273       DBIx::Class::Storage::DBI
274
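       For example, a rough monitoring sketch combining the two methods above
       (the one second threshold is arbitrary):

         warn "at least one replicant has stopped replicating\n"
           unless $schema->storage->is_replicating;

         my $lag = $schema->storage->lag_behind_master;
         warn "replicants are lagging ${lag}s behind the master\n"
           if defined $lag && $lag > 1;
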
275   connect_call_datetime_setup
276       calls "connect_call_datetime_setup" in DBIx::Class::Storage::DBI for
277       all storages
278

GOTCHAS

280       Because replicants can lag behind the master, you must take care to
281       use one of the methods to force read queries to the master should you
282       need realtime data integrity.  For example, if you
283       insert a row, and then immediately re-read it from the database (say,
284       by doing $row->discard_changes) or you insert a row and then
285       immediately build a query that expects that row to be an item, you
286       should force the master to handle reads.  Otherwise, due to the lag,
287       there is no certainty your data will be in the expected state.
288
289       For data integrity, all transactions automatically use the master
290       storage for all read and write queries.  Using a transaction is the
291       preferred and recommended method to force the master to handle all read
292       queries.
293
294       Otherwise, you can force a single query to use the master with the
295       'force_pool' attribute:
296
297         my $row = $resultset->search(undef, {force_pool=>'master'})->find($pk);
298
299       This attribute will safely be ignored by non-replicated storages, so
300       you can use the same code for both types of systems.
301
302       Lastly, you can use the "execute_reliably" method, which works very
303       much like a transaction.
304
305       For debugging, you can turn replication on/off with the methods
306       "set_reliable_storage" and "set_balanced_storage"; however, this
307       operates at a global level and is not suitable if you have a shared
308       Schema object being used by multiple processes, such as on a web
309       application server.  You can get around this limitation by using the
310       Schema clone method.
311
312         my $new_schema = $schema->clone;
313         $new_schema->set_reliable_storage;
314
315         ## $new_schema will use only the Master storage for all reads/writes while
316         ## the $schema object will use replicated storage.
317

AUTHOR

319         John Napiorkowski <john.napiorkowski@takkle.com>
320
321       Based on code originated by:
322
323         Norbert Csongrádi <bert@cpan.org>
324         Peter Siklósi <einon@einon.hu>
325

LICENSE

327       You may distribute this code under the same terms as Perl itself.
328