
NAME

       DBIx::Class::Storage::DBI::Replicated - BETA Replicated database
       support


SYNOPSIS

       The following example shows how to change an existing $schema to a
       replicated storage type, add some replicated (read-only) databases, and
       perform reporting tasks.

       You should set the 'storage_type' attribute to a replicated type.  You
       should also define your arguments, such as which balancer you want and
       any arguments that the Pool object should get.

         my $schema = Schema::Class->clone;
         $schema->storage_type(['::DBI::Replicated', { balancer_type => '::Random' }]);
         $schema->connection(...);

       Next, you need to add in the Replicants.  Basically this is an array of
       arrayrefs, where each arrayref is database connect information.  Think
       of these arguments as what you'd pass to the 'normal' $schema->connect
       method.

         $schema->storage->connect_replicants(
           [$dsn1, $user, $pass, \%opts],
           [$dsn2, $user, $pass, \%opts],
           [$dsn3, $user, $pass, \%opts],
         );

       Now, just use the $schema as you normally would.  All reads will
       automatically be delegated to the replicants, while writes go to the
       master.

         $schema->resultset('Source')->search({name=>'etc'});

       You can force a given query to use a particular storage using the
       search attribute 'force_pool'.  For example:

         my $rs = $schema->resultset('Source')->search(undef, {force_pool=>'master'});

       Now $rs will force everything (both reads and writes) to use whatever
       was set up as the master storage.  'master' is hardcoded to always
       point to the Master, but you can also use any Replicant name, as shown
       below.  Please see DBIx::Class::Storage::DBI::Replicated::Pool and its
       replicants attribute for more.

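       For instance, assuming the Pool keeps its replicants keyed by their DSN
       (as described in DBIx::Class::Storage::DBI::Replicated::Pool), a
       hypothetical sketch pinning a resultset to one specific replicant could
       look like:

         ## Sketch only: assumes $dsn2 is the key of a connected replicant
         my $pinned_rs = $schema->resultset('Source')->search(
           undef,
           { force_pool => $dsn2 },
         );
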
       Also see transactions and "execute_reliably" for alternative ways to
       force read traffic to the master.  In general, you should wrap your
       statements in a transaction when you are reading and writing to the
       same tables at the same time, since your replicants will often lag a
       bit behind the master.

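       For example, a minimal sketch using the standard DBIx::Class::Schema
       "txn_do" method to keep an insert and the immediate re-read on the same
       (master) storage:

         $schema->txn_do(sub {
           ## inside a transaction both the INSERT and the SELECT below are
           ## handled by the master, so the read cannot hit a lagging replicant
           $schema->resultset('Source')->create({ name => 'etc' });
           return $schema->resultset('Source')->search({ name => 'etc' })->first;
         });
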
       If you have a multi-statement read-only transaction you can force it to
       select a random server in the pool by:

         my $rs = $schema->resultset('Source')->search( undef,
           { force_pool => $schema->storage->read_handler->next_storage }
         );


DESCRIPTION

       Warning: This class is marked BETA.  It has been running a production
       website using MySQL native replication as its backend and we have some
       decent test coverage, but the code hasn't yet been stressed by a
       variety of databases.  Individual DBs may have quirks we are not aware
       of.  Please use this in development first and pass along your
       experiences/bug fixes.

       This class implements a replicated data store for DBI.  Currently you
       can define one master and numerous slave database connections.  All
       write-type queries (INSERT, UPDATE, DELETE and even LAST_INSERT_ID) are
       routed to the master database, while all read-type queries (SELECTs) go
       to the slave databases.

       Basically, any method request that DBIx::Class::Storage::DBI would
       normally handle gets delegated to one of the two attributes:
       "read_handler" or "write_handler".  Additionally, some methods need to
       be distributed to all existing storages.  This way our storage class is
       a drop-in replacement for DBIx::Class::Storage::DBI.

       Read traffic is spread across the replicants (slaves) according to a
       user-selected algorithm.  The default algorithm is random weighted.

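       As a sketch, the algorithm is selected through "balancer_type" when
       configuring the storage; '::First' below is an assumption naming a
       simple bundled balancer that always reads from the first available
       replicant rather than choosing one at random:

         ## assumed balancer class, shown only to illustrate the choice
         $schema->storage_type(['::DBI::Replicated', { balancer_type => '::First' }]);
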

NOTES

       The consistency between master and replicants is database specific.
       The Pool gives you a method to validate its replicants, removing and
       replacing them when they fail/pass predefined criteria.  Please make
       careful use of the ways to force a query to run against Master when
       needed.

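       A sketch of triggering that validation by hand (the validate_replicants
       method name is taken from DBIx::Class::Storage::DBI::Replicated::Pool
       and should be treated as an assumption here):

         ## deactivates replicants that fail the criteria and restores ones that pass
         $schema->storage->pool->validate_replicants;
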

REQUIREMENTS

       Replicated Storage has additional requirements not currently part of
       DBIx::Class.  See DBIx::Class::Optional::Dependencies for more details.


ATTRIBUTES

       This class defines the following attributes.

   schema
       The underlying DBIx::Class::Schema object this storage is attached to.

   pool_type
       Contains the classname which will instantiate the "pool" object.
       Defaults to: DBIx::Class::Storage::DBI::Replicated::Pool.

   pool_args
       Contains a hashref of initialization arguments to pass to the Pool
       object.  See DBIx::Class::Storage::DBI::Replicated::Pool for available
       arguments.

   balancer_type
       The replication pool requires a balancer class to provide the methods
       for choosing how to spread the query load across each replicant in the
       pool.

   balancer_args
       Contains a hashref of initialization arguments to pass to the Balancer
       object.  See DBIx::Class::Storage::DBI::Replicated::Balancer for
       available arguments.

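       As a sketch, both hashrefs can be supplied alongside the storage type;
       the auto_validate_every and maximum_lag arguments shown here are
       assumptions drawn from the Balancer and Pool documentation:

         $schema->storage_type(['::DBI::Replicated', {
           balancer_type => '::Random',
           ## assumed Balancer argument: re-validate replicants every 5 seconds
           balancer_args => { auto_validate_every => 5 },
           ## assumed Pool argument: ignore replicants more than 2 seconds behind
           pool_args     => { maximum_lag => 2 },
         }]);
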
   pool
       Is a DBIx::Class::Storage::DBI::Replicated::Pool or derived class.
       This is a container class for one or more replicated databases.

   balancer
       Is a DBIx::Class::Storage::DBI::Replicated::Balancer or derived class.
       This is a class that takes a pool
       (DBIx::Class::Storage::DBI::Replicated::Pool) and spreads the query
       load across the replicants in that pool.

   master
       The master defines the canonical state for a pool of connected
       databases.  All the replicants are expected to match this database's
       state.  Thus, in a classic Master / Slaves distributed system, all the
       slaves are expected to replicate the Master's state as quickly as
       possible.  This is the only database in the pool of databases that is
       allowed to handle write traffic.


ATTRIBUTES IMPLEMENTING THE DBIx::Class::Storage::DBI INTERFACE

       The following attributes are delegated all the methods required for the
       DBIx::Class::Storage::DBI interface.

   read_handler
       Defines an object that implements the read side of
       DBIx::Class::Storage::DBI.

   write_handler
       Defines an object that implements the write side of
       DBIx::Class::Storage::DBI, as well as methods that don't write or read
       that can be called on only one storage, methods that return a $dbh, and
       any methods that don't make sense to run on a replicant.

   around: connect_info
       Preserves the master's "connect_info" options (for merging with the
       replicants).  Also sets any Replicated-related options from
       connect_info, such as "pool_type", "pool_args", "balancer_type" and
       "balancer_args".

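       A sketch of supplying those options directly through the normal
       connection() call (the option names come from the list above, the
       values are purely illustrative):

         $schema->connection($dsn, $user, $pass, {
           balancer_type => '::Random',
           ## maximum_lag is an assumed Pool argument used for illustration
           pool_args     => { maximum_lag => 2 },
         });
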

METHODS

       This class defines the following methods.

   BUILDARGS
       When DBIx::Class::Schema instantiates its storage, it passes itself as
       the first argument.  So we need to massage the arguments a bit so that
       all the bits get put into the correct places.

   _build_master
       Lazy builder for the "master" attribute.

   _build_pool
       Lazy builder for the "pool" attribute.

   _build_balancer
       Lazy builder for the "balancer" attribute.  This takes a Pool object so
       that the balancer knows which pool it's balancing.

   _build_write_handler
       Lazy builder for the "write_handler" attribute.  The default is to set
       this to the "master".

   _build_read_handler
       Lazy builder for the "read_handler" attribute.  The default is to set
       this to the "balancer".

   around: connect_replicants
       All calls to connect_replicants need to have an existing $schema tacked
       onto the top of the args, since DBIx::Class::Storage::DBI needs it, and
       any connect_info options merged with the master's, with replicant opts
       having higher priority.

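       For example, a sketch of the merge described above (quote_names and
       PrintError are ordinary connection attributes used here purely for
       illustration):

         $schema->connection($master_dsn, $user, $pass, { quote_names => 1 });
         $schema->storage->connect_replicants(
           ## inherits quote_names => 1 from the master's connect_info,
           ## while its own PrintError => 0 takes priority on any overlap
           [$dsn1, $user, $pass, { PrintError => 0 }],
         );
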
   all_storages
       Returns an array of all the connected storage backends.  The first
       element in the returned array is the master, and the rest are each of
       the replicants.

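       For example:

         my ($master, @replicants) = $schema->storage->all_storages;
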
   execute_reliably ($coderef, ?@args)
       Given a coderef, saves the current state of the "read_handler", forces
       it to use reliable storage (e.g. sets it to the master), executes the
       coderef and then restores the original state.

       Example:

         my $reliably = sub {
           my $name = shift @_;
           $schema->resultset('User')->create({name=>$name});
           my $user_rs = $schema->resultset('User')->find({name=>$name});
           return $user_rs;
         };

         my $user_rs = $schema->storage->execute_reliably($reliably, 'John');

       Use this when you must be certain of your database state, such as when
       you just inserted something and need to get a resultset including it,
       etc.

   set_reliable_storage
       Sets the current $schema to be 'reliable', that is, all queries, both
       read and write, are sent to the master.

   set_balanced_storage
       Sets the current $schema to use the "balancer" for all reads, while all
       writes are sent to the master only.  See the sketch below.

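       A sketch of toggling between the two modes on the storage object:

         $schema->storage->set_reliable_storage;   ## all queries go to the master
         ## ... work that must see the latest data ...
         $schema->storage->set_balanced_storage;   ## reads return to the replicants
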
   connected
       Check that the master and at least one of the replicants is connected.

   ensure_connected
       Make sure all the storages are connected.

   limit_dialect
       Set the limit_dialect for all existing storages.

   quote_char
       Set the quote_char for all existing storages.

   name_sep
       Set the name_sep for all existing storages.

   set_schema
       Set the schema object for all existing storages.

   debug
       Set a debug flag across all storages.

   debugobj
       Set a debug object.

   debugfh
       Set a debugfh object.

   debugcb
       Set a debug callback.

   disconnect
       Disconnect everything.

   cursor_class
       Set the cursor class on all storages, or return the master's.

   cursor
       Set the cursor class on all storages, or return the master's; alias for
       "cursor_class" above.

   unsafe
       Sets the "unsafe" in DBIx::Class::Storage::DBI option on all storages
       or returns the master's current setting.

   disable_sth_caching
       Sets the "disable_sth_caching" in DBIx::Class::Storage::DBI option on
       all storages or returns the master's current setting.

   lag_behind_master
       Returns the highest "lag_behind_master" in DBIx::Class::Storage::DBI
       value among the replicants.

   is_replicating
       Returns true if all replicants return true for "is_replicating" in
       DBIx::Class::Storage::DBI.

   connect_call_datetime_setup
       Calls "connect_call_datetime_setup" in DBIx::Class::Storage::DBI for
       all storages.

   connect_call_rebase_sqlmaker
       Calls "connect_call_rebase_sqlmaker" in DBIx::Class::Storage::DBI for
       all storages.


GOTCHAS

       Due to the fact that replicants can lag behind a master, you must take
       care to make sure you use one of the methods to force read queries to a
       master should you need realtime data integrity.  For example, if you
       insert a row and then immediately re-read it from the database (say, by
       doing $result->discard_changes), or you insert a row and then
       immediately build a query that expects that row to be an item, you
       should force the master to handle reads.  Otherwise, due to the lag,
       there is no certainty your data will be in the expected state.

       For data integrity, all transactions automatically use the master
       storage for all read and write queries.  Using a transaction is the
       preferred and recommended method to force the master to handle all read
       queries.

       Otherwise, you can force a single query to use the master with the
       'force_pool' attribute:

         my $result = $resultset->search(undef, {force_pool=>'master'})->find($pk);

       This attribute will safely be ignored by non-replicated storages, so
       you can use the same code for both types of systems.

       Lastly, you can use the "execute_reliably" method, which works very
       much like a transaction.

       For debugging, you can turn replication on/off with the methods
       "set_reliable_storage" and "set_balanced_storage"; however, this
       operates at a global level and is not suitable if you have a shared
       Schema object being used by multiple processes, such as on a web
       application server.  You can get around this limitation by using the
       Schema clone method.

         my $new_schema = $schema->clone;
         $new_schema->set_reliable_storage;

         ## $new_schema will use only the Master storage for all reads/writes while
         ## the $schema object will use replicated storage.


FURTHER QUESTIONS?

       Check the list of additional DBIC resources.


COPYRIGHT AND LICENSE

       This module is free software copyright by the DBIx::Class (DBIC)
       authors.  You can redistribute it and/or modify it under the same terms
       as the DBIx::Class library.