2
3
4
NAME
       DBIx::Class::Manual::Cookbook - Miscellaneous recipes
7
SEARCHING
   Paged results
10 When you expect a large number of results, you can ask DBIx::Class for
11 a paged resultset, which will fetch only a defined number of records at
12 a time:
13
14 my $rs = $schema->resultset('Artist')->search(
15 undef,
16 {
17 page => 1, # page to return (defaults to 1)
18 rows => 10, # number of results per page
19 },
20 );
21
22 return $rs->all(); # all records for page 1
23
24 return $rs->page(2); # records for page 2
25
26 You can get a DBIx::Class::ResultSet::Pager object for the resultset
27 (suitable for use in e.g. a template) using the "pager" method:
28
29 return $rs->pager();
30
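       The pager object is Data::Page-compatible, so the usual paging helpers
       are available. A brief sketch (the output format is illustrative):

           my $pager = $rs->pager();

           printf "Page %d of %d (%d results total)\n",
               $pager->current_page,
               $pager->last_page,
               $pager->total_entries;

           my $next = $pager->next_page;   # undef when already on the last page
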
31 Complex WHERE clauses
32 Sometimes you need to formulate a query using specific operators:
33
34 my @albums = $schema->resultset('Album')->search({
35 artist => { 'like', '%Lamb%' },
36 title => { 'like', '%Fear of Fours%' },
37 });
38
39 This results in something like the following "WHERE" clause:
40
41 WHERE artist LIKE ? AND title LIKE ?
42
43 And the following bind values for the placeholders: '%Lamb%', '%Fear of
44 Fours%'.
45
46 Other queries might require slightly more complex logic:
47
48 my @albums = $schema->resultset('Album')->search({
49 -or => [
50 -and => [
51 artist => { 'like', '%Smashing Pumpkins%' },
52 title => 'Siamese Dream',
53 ],
54 artist => 'Starchildren',
55 ],
56 });
57
58 This results in the following "WHERE" clause:
59
60 WHERE ( artist LIKE '%Smashing Pumpkins%' AND title = 'Siamese Dream' )
61 OR artist = 'Starchildren'
62
63 For more information on generating complex queries, see "WHERE CLAUSES"
64 in SQL::Abstract::Classic.
65
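       As a further illustration (not part of the original recipe, and
       assuming the Album source also has a "year" column): a hashref with
       several operators ANDs them together, while an arrayref ORs its
       alternatives, so the two forms can be mixed freely:

           my @albums = $schema->resultset('Album')->search({
               year  => { '>=' => 1997, '<' => 2000 },              # year >= ? AND year < ?
               title => [ { -like => 'Best of%' }, 'Retrospective' ], # title LIKE ? OR title = ?
           });
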
66 Retrieve one and only one row from a resultset
67 Sometimes you need only the first "top" row of a resultset. While this
68 can be easily done with $rs->first, it is suboptimal, as a full blown
69 cursor for the resultset will be created and then immediately destroyed
70 after fetching the first row object. $rs->single is designed
71 specifically for this case - it will grab the first returned result
72 without even instantiating a cursor.
73
74 Before replacing all your calls to "first()" with "single()" please
75 observe the following CAVEATS:
76
77 • While single() takes a search condition just like search() does, it
78 does _not_ accept search attributes. However one can always chain a
79 single() to a search():
80
81 my $top_cd = $cd_rs->search({}, { order_by => 'rating' })->single;
82
83 • Since single() is the engine behind find(), it is designed to fetch
84 a single row per database query. Thus a warning will be issued when
         the underlying SELECT returns more than one row. Sometimes, however,
         this usage is valid: e.g. we have an arbitrary number of CDs, but
         only one of them is at the top of the charts at any given time. If
88 you know what you are doing, you can silence the warning by
89 explicitly limiting the resultset size:
90
91 my $top_cd = $cd_rs->search ({}, { order_by => 'rating', rows => 1 })->single;
92
93 Arbitrary SQL through a custom ResultSource
94 Sometimes you have to run arbitrary SQL because your query is too
95 complex (e.g. it contains Unions, Sub-Selects, Stored Procedures, etc.)
96 or has to be optimized for your database in a special way, but you
97 still want to get the results as a DBIx::Class::ResultSet.
98
99 This is accomplished by defining a ResultSource::View for your query,
100 almost like you would define a regular ResultSource.
101
102 package My::Schema::Result::UserFriendsComplex;
103 use strict;
104 use warnings;
105 use base qw/DBIx::Class::Core/;
106
107 __PACKAGE__->table_class('DBIx::Class::ResultSource::View');
108
           # For the time being this is necessary even for virtual views
           __PACKAGE__->table('user_friends_complex');  # any name will do, as long as it is unique
111
112 #
113 # ->add_columns, etc.
114 #
115
116 # do not attempt to deploy() this view
117 __PACKAGE__->result_source_instance->is_virtual(1);
118
119 __PACKAGE__->result_source_instance->view_definition(q[
120 SELECT u.* FROM user u
121 INNER JOIN user_friends f ON u.id = f.user_id
122 WHERE f.friend_user_id = ?
123 UNION
124 SELECT u.* FROM user u
125 INNER JOIN user_friends f ON u.id = f.friend_user_id
126 WHERE f.user_id = ?
127 ]);
128
129 Next, you can execute your complex query using bind parameters like
130 this:
131
132 my $friends = $schema->resultset( 'UserFriendsComplex' )->search( {},
133 {
134 bind => [ 12345, 12345 ]
135 }
136 );
137
138 ... and you'll get back a perfect DBIx::Class::ResultSet (except, of
139 course, that you cannot modify the rows it contains, e.g. cannot call
140 update or delete on it).
141
142 Note that you cannot have bind parameters unless is_virtual is set to
143 true.
144
145 • NOTE
146
147 If you're using the old deprecated "$rsrc_instance->name(\'( SELECT
148 ...')" method for custom SQL execution, you are highly encouraged
149 to update your code to use a virtual view as above. If you do not
150 want to change your code, and just want to suppress the deprecation
151 warning when you call "deploy" in DBIx::Class::Schema, add this
152 line to your source definition, so that "deploy" will exclude this
153 "table":
154
155 sub sqlt_deploy_hook { $_[1]->schema->drop_table ($_[1]) }
156
157 Using specific columns
158 When you only want specific columns from a table, you can use "columns"
159 to specify which ones you need. This is useful to avoid loading columns
160 with large amounts of data that you aren't about to use anyway:
161
162 my $rs = $schema->resultset('Artist')->search(
163 undef,
164 {
165 columns => [qw/ name /]
166 }
167 );
168
169 # Equivalent SQL:
170 # SELECT artist.name FROM artist
171
172 This is a shortcut for "select" and "as", see below. "columns" cannot
173 be used together with "select" and "as".
174
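       If you only want to add an extra column or two to the default set
       rather than replace it, the related '+columns' attribute can be used.
       A hedged sketch that borrows the CD/artist relationship used elsewhere
       in this document:

           my $rs = $schema->resultset('CD')->search(
               undef,
               {
                   join       => 'artist',
                   '+columns' => [ { artist_name => 'artist.name' } ],
               }
           );

           my $artist_name = $rs->first->get_column('artist_name');
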
175 Using database functions or stored procedures
176 The combination of "select" and "as" can be used to return the result
177 of a database function or stored procedure as a column value. You use
178 "select" to specify the source for your column value (e.g. a column
179 name, function, or stored procedure name). You then use "as" to set the
180 column name you will use to access the returned value:
181
182 my $rs = $schema->resultset('Artist')->search(
183 {},
184 {
185 select => [ 'name', { LENGTH => 'name' } ],
186 as => [qw/ name name_length /],
187 }
188 );
189
190 # Equivalent SQL:
191 # SELECT name name, LENGTH( name )
192 # FROM artist
193
194 Note that the "as" attribute has absolutely nothing to do with the SQL
195 syntax " SELECT foo AS bar " (see the documentation in "ATTRIBUTES" in
196 DBIx::Class::ResultSet). You can control the "AS" part of the generated
197 SQL via the "-as" field attribute as follows:
198
199 my $rs = $schema->resultset('Artist')->search(
200 {},
201 {
202 join => 'cds',
203 distinct => 1,
204 '+select' => [ { count => 'cds.cdid', -as => 'amount_of_cds' } ],
205 '+as' => [qw/num_cds/],
206 order_by => { -desc => 'amount_of_cds' },
207 }
208 );
209
210 # Equivalent SQL
211 # SELECT me.artistid, me.name, me.rank, me.charfield, COUNT( cds.cdid ) AS amount_of_cds
212 # FROM artist me LEFT JOIN cd cds ON cds.artist = me.artistid
213 # GROUP BY me.artistid, me.name, me.rank, me.charfield
214 # ORDER BY amount_of_cds DESC
215
216 If your alias exists as a column in your base class (i.e. it was added
217 with add_columns), you just access it as normal. Our "Artist" class has
218 a "name" column, so we just use the "name" accessor:
219
220 my $artist = $rs->first();
221 my $name = $artist->name();
222
223 If on the other hand the alias does not correspond to an existing
224 column, you have to fetch the value using the "get_column" accessor:
225
226 my $name_length = $artist->get_column('name_length');
227
228 If you don't like using "get_column", you can always create an accessor
229 for any of your aliases using either of these:
230
231 # Define accessor manually:
232 sub name_length { shift->get_column('name_length'); }
233
234 # Or use DBIx::Class::AccessorGroup:
235 __PACKAGE__->mk_group_accessors('column' => 'name_length');
236
237 See also "Using SQL functions on the left hand side of a comparison".
238
239 SELECT DISTINCT with multiple columns
240 my $rs = $schema->resultset('Artist')->search(
241 {},
242 {
243 columns => [ qw/artist_id name rank/ ],
244 distinct => 1
245 }
246 );
247
248 my $rs = $schema->resultset('Artist')->search(
249 {},
250 {
251 columns => [ qw/artist_id name rank/ ],
252 group_by => [ qw/artist_id name rank/ ],
253 }
254 );
255
256 # Equivalent SQL:
257 # SELECT me.artist_id, me.name, me.rank
258 # FROM artist me
259 # GROUP BY artist_id, name, rank
260
261 SELECT COUNT(DISTINCT colname)
262 my $rs = $schema->resultset('Artist')->search(
263 {},
264 {
265 columns => [ qw/name/ ],
266 distinct => 1
267 }
268 );
269
270 my $rs = $schema->resultset('Artist')->search(
271 {},
272 {
273 columns => [ qw/name/ ],
274 group_by => [ qw/name/ ],
275 }
276 );
277
278 my $count = $rs->count;
279
280 # Equivalent SQL:
       # SELECT COUNT( * ) FROM (SELECT me.name FROM artist me GROUP BY me.name) me
282
283 Grouping results
284 DBIx::Class supports "GROUP BY" as follows:
285
286 my $rs = $schema->resultset('Artist')->search(
287 {},
288 {
289 join => [qw/ cds /],
290 select => [ 'name', { count => 'cds.id' } ],
291 as => [qw/ name cd_count /],
292 group_by => [qw/ name /]
293 }
294 );
295
296 # Equivalent SQL:
297 # SELECT name, COUNT( cd.id ) FROM artist
298 # LEFT JOIN cd ON artist.id = cd.artist
299 # GROUP BY name
300
       Please see "ATTRIBUTES" in DBIx::Class::ResultSet documentation if you
       are in any way unsure about the use of the attributes above ("join",
       "select", "as" and "group_by").
304
305 Subqueries
306 You can write subqueries relatively easily in DBIC.
307
308 my $inside_rs = $schema->resultset('Artist')->search({
309 name => [ 'Billy Joel', 'Brittany Spears' ],
310 });
311
312 my $rs = $schema->resultset('CD')->search({
313 artist_id => { -in => $inside_rs->get_column('id')->as_query },
314 });
315
316 The usual operators ( '=', '!=', -in, -not_in, etc.) are supported.
317
318 NOTE: You have to explicitly use '=' when doing an equality comparison.
319 The following will not work:
320
321 my $rs = $schema->resultset('CD')->search({
322 artist_id => $inside_rs->get_column('id')->as_query, # does NOT work
323 });
324
325 Support
326
327 Subqueries are supported in the where clause (first hashref), and in
328 the from, select, and +select attributes.
329
330 Correlated subqueries
331
332 my $cdrs = $schema->resultset('CD');
333 my $rs = $cdrs->search({
334 year => {
335 '=' => $cdrs->search(
336 { artist_id => { -ident => 'me.artist_id' } },
337 { alias => 'sub_query' }
338 )->get_column('year')->max_rs->as_query,
339 },
340 });
341
342 That creates the following SQL:
343
344 SELECT me.cdid, me.artist, me.title, me.year, me.genreid, me.single_track
345 FROM cd me
346 WHERE year = (
347 SELECT MAX(sub_query.year)
348 FROM cd sub_query
349 WHERE artist_id = me.artist_id
350 )
351
352 Predefined searches
353 You can define frequently used searches as methods by subclassing
354 DBIx::Class::ResultSet:
355
356 package My::DBIC::ResultSet::CD;
357 use strict;
358 use warnings;
359 use base 'DBIx::Class::ResultSet';
360
361 sub search_cds_ordered {
362 my ($self) = @_;
363
364 return $self->search(
365 {},
366 { order_by => 'name DESC' },
367 );
368 }
369
370 1;
371
372 If you're using "load_namespaces" in DBIx::Class::Schema, simply place
373 the file into the "ResultSet" directory next to your "Result"
374 directory, and it will be automatically loaded.
375
376 If however you are still using "load_classes" in DBIx::Class::Schema,
377 first tell DBIx::Class to create an instance of the ResultSet class for
378 you, in your My::DBIC::Schema::CD class:
379
380 # class definition as normal
381 use base 'DBIx::Class::Core';
382 __PACKAGE__->table('cd');
383
384 # tell DBIC to use the custom ResultSet class
385 __PACKAGE__->resultset_class('My::DBIC::ResultSet::CD');
386
387 Note that "resultset_class" must be called after "load_components" and
388 "table", or you will get errors about missing methods.
389
390 Then call your new method in your code:
391
392 my $ordered_cds = $schema->resultset('CD')->search_cds_ordered();
393
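       Because the method returns another resultset, it chains with ordinary
       searches like any built-in method. A usage sketch (the extra condition
       is illustrative):

           my $recent_ordered_cds = $schema->resultset('CD')
                                           ->search({ year => { '>=' => 2000 } })
                                           ->search_cds_ordered();
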
394 Using SQL functions on the left hand side of a comparison
395 Using SQL functions on the left hand side of a comparison is generally
396 not a good idea since it requires a scan of the entire table. (Unless
397 your RDBMS supports indexes on expressions - including return values of
398 functions - and you create an index on the return value of the function
399 in question.) However, it can be accomplished with "DBIx::Class" when
400 necessary by resorting to literal SQL:
401
402 $rs->search(
403 \[ 'YEAR(date_of_birth) = ?', 1979 ]
404 );
405
406 # Equivalent SQL:
407 # SELECT * FROM employee WHERE YEAR(date_of_birth) = ?
408
409 To include the function as part of a larger search, use the '-and'
410 keyword to collect the search conditions:
411
412 $rs->search({ -and => [
413 name => 'Bob',
414 \[ 'YEAR(date_of_birth) = ?', 1979 ]
415 ]});
416
417 # Equivalent SQL:
418 # SELECT * FROM employee WHERE name = ? AND YEAR(date_of_birth) = ?
419
420 Note: the syntax for specifying the bind value's datatype and value is
421 explained in "DBIC BIND VALUES" in DBIx::Class::ResultSet.
422
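       For example, a bind value can be given together with its datatype
       using the arrayref form described there. A sketch of that syntax:

           $rs->search(
               \[ 'YEAR(date_of_birth) = ?',
                  [ { sqlt_datatype => 'integer' } => 1979 ] ]
           );
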
423 See also "Literal SQL with placeholders and bind values (subqueries)"
424 in SQL::Abstract::Classic.
425
426 Software Limits
427 When your RDBMS does not have a working SQL limit mechanism (e.g.
428 Sybase ASE) and GenericSubQ is either too slow or does not work at all,
429 you can try the software_limit DBIx::Class::ResultSet attribute, which
430 skips over records to simulate limits in the Perl layer.
431
432 For example:
433
434 my $paged_rs = $rs->search({}, {
435 rows => 25,
436 page => 3,
437 order_by => [ 'me.last_name' ],
438 software_limit => 1,
439 });
440
441 You can set it as a default for your schema by placing the following in
442 your "Schema.pm":
443
444 __PACKAGE__->default_resultset_attributes({ software_limit => 1 });
445
       WARNING: If you are dealing with large resultsets and your DBI or
       ODBC/ADO driver does not have proper cursor support (i.e. it loads the
       whole resultset into memory), then this feature will be extremely slow
       and use huge amounts of memory at best. At worst it may cause your
       process to run out of memory and destabilize your server. Beware!
452
JOINS AND PREFETCHING
   Using joins and prefetch
455 You can use the "join" attribute to allow searching on, or sorting your
456 results by, one or more columns in a related table.
457
458 This requires that you have defined the DBIx::Class::Relationship. For
459 example :
460
461 My::Schema::CD->has_many( artists => 'My::Schema::Artist', 'artist_id');
462
463 To return all CDs matching a particular artist name, you specify the
464 name of the relationship ('artists'):
465
466 my $rs = $schema->resultset('CD')->search(
467 {
468 'artists.name' => 'Bob Marley'
469 },
470 {
471 join => 'artists', # join the artist table
472 }
473 );
474
475 # Equivalent SQL:
476 # SELECT cd.* FROM cd
477 # JOIN artist ON cd.artist = artist.id
478 # WHERE artist.name = 'Bob Marley'
479
       In that example both the join and the condition use the relationship
       name rather than the table name (see DBIx::Class::Manual::Joining for
       more details on aliasing).
483
       If required, you can now sort on any column in the related tables by
       including it in your "order_by" attribute (again using the aliased
       relation name rather than the table name):
487
488 my $rs = $schema->resultset('CD')->search(
489 {
490 'artists.name' => 'Bob Marley'
491 },
492 {
493 join => 'artists',
494 order_by => [qw/ artists.name /]
495 }
496 );
497
498 # Equivalent SQL:
499 # SELECT cd.* FROM cd
500 # JOIN artist ON cd.artist = artist.id
501 # WHERE artist.name = 'Bob Marley'
502 # ORDER BY artist.name
503
504 Note that the "join" attribute should only be used when you need to
505 search or sort using columns in a related table. Joining related tables
506 when you only need columns from the main table will make performance
507 worse!
508
509 Now let's say you want to display a list of CDs, each with the name of
510 the artist. The following will work fine:
511
512 while (my $cd = $rs->next) {
513 print "CD: " . $cd->title . ", Artist: " . $cd->artist->name;
514 }
515
516 There is a problem however. We have searched both the "cd" and "artist"
517 tables in our main query, but we have only returned data from the "cd"
518 table. To get the artist name for any of the CD objects returned,
519 DBIx::Class will go back to the database:
520
521 SELECT artist.* FROM artist WHERE artist.id = ?
522
523 A statement like the one above will run for each and every CD returned
524 by our main query. Five CDs, five extra queries. A hundred CDs, one
525 hundred extra queries!
526
527 Thankfully, DBIx::Class has a "prefetch" attribute to solve this
528 problem. This allows you to fetch results from related tables in
529 advance:
530
531 my $rs = $schema->resultset('CD')->search(
532 {
533 'artists.name' => 'Bob Marley'
534 },
535 {
536 join => 'artists',
537 order_by => [qw/ artists.name /],
538 prefetch => 'artists' # return artist data too!
539 }
540 );
541
542 # Equivalent SQL (note SELECT from both "cd" and "artist"):
543 # SELECT cd.*, artist.* FROM cd
544 # JOIN artist ON cd.artist = artist.id
545 # WHERE artist.name = 'Bob Marley'
546 # ORDER BY artist.name
547
548 The code to print the CD list remains the same:
549
550 while (my $cd = $rs->next) {
551 print "CD: " . $cd->title . ", Artist: " . $cd->artist->name;
552 }
553
554 DBIx::Class has now prefetched all matching data from the "artist"
555 table, so no additional SQL statements are executed. You now have a
556 much more efficient query.
557
558 Also note that "prefetch" should only be used when you know you will
559 definitely use data from a related table. Pre-fetching related tables
560 when you only need columns from the main table will make performance
561 worse!
562
563 Multiple joins
564 In the examples above, the "join" attribute was a scalar. If you pass
565 an array reference instead, you can join to multiple tables. In this
566 example, we want to limit the search further, using "LinerNotes":
567
568 # Relationships defined elsewhere:
569 # CD->belongs_to('artist' => 'Artist');
570 # CD->has_one('liner_notes' => 'LinerNotes', 'cd');
571 my $rs = $schema->resultset('CD')->search(
572 {
           'artist.name' => 'Bob Marley',
           'liner_notes.notes' => { 'like', '%some text%' },
575 },
576 {
577 join => [qw/ artist liner_notes /],
578 order_by => [qw/ artist.name /],
579 }
580 );
581
582 # Equivalent SQL:
583 # SELECT cd.*, artist.*, liner_notes.* FROM cd
584 # JOIN artist ON cd.artist = artist.id
585 # JOIN liner_notes ON cd.id = liner_notes.cd
586 # WHERE artist.name = 'Bob Marley' AND liner_notes.notes LIKE '%some text%'
587 # ORDER BY artist.name
588
589 Multi-step joins
590 Sometimes you want to join more than one relationship deep. In this
591 example, we want to find all "Artist" objects who have "CD"s whose
592 "LinerNotes" contain a specific string:
593
594 # Relationships defined elsewhere:
595 # Artist->has_many('cds' => 'CD', 'artist');
596 # CD->has_one('liner_notes' => 'LinerNotes', 'cd');
597
598 my $rs = $schema->resultset('Artist')->search(
599 {
600 'liner_notes.notes' => { 'like', '%some text%' },
601 },
602 {
603 join => {
604 'cds' => 'liner_notes'
605 }
606 }
607 );
608
609 # Equivalent SQL:
610 # SELECT artist.* FROM artist
611 # LEFT JOIN cd ON artist.id = cd.artist
612 # LEFT JOIN liner_notes ON cd.id = liner_notes.cd
613 # WHERE liner_notes.notes LIKE '%some text%'
614
615 Joins can be nested to an arbitrary level. So if we decide later that
616 we want to reduce the number of Artists returned based on who wrote the
617 liner notes:
618
619 # Relationship defined elsewhere:
620 # LinerNotes->belongs_to('author' => 'Person');
621
622 my $rs = $schema->resultset('Artist')->search(
623 {
624 'liner_notes.notes' => { 'like', '%some text%' },
625 'author.name' => 'A. Writer'
626 },
627 {
628 join => {
629 'cds' => {
630 'liner_notes' => 'author'
631 }
632 }
633 }
634 );
635
636 # Equivalent SQL:
637 # SELECT artist.* FROM artist
638 # LEFT JOIN cd ON artist.id = cd.artist
639 # LEFT JOIN liner_notes ON cd.id = liner_notes.cd
640 # LEFT JOIN author ON author.id = liner_notes.author
641 # WHERE liner_notes.notes LIKE '%some text%'
642 # AND author.name = 'A. Writer'
643
644 Multi-step and multiple joins
645 With various combinations of array and hash references, you can join
646 tables in any combination you desire. For example, to join Artist to
647 CD and Concert, and join CD to LinerNotes:
648
649 # Relationships defined elsewhere:
650 # Artist->has_many('concerts' => 'Concert', 'artist');
651
652 my $rs = $schema->resultset('Artist')->search(
653 { },
654 {
655 join => [
656 {
657 cds => 'liner_notes'
658 },
659 'concerts'
660 ],
661 }
662 );
663
664 # Equivalent SQL:
665 # SELECT artist.* FROM artist
666 # LEFT JOIN cd ON artist.id = cd.artist
667 # LEFT JOIN liner_notes ON cd.id = liner_notes.cd
668 # LEFT JOIN concert ON artist.id = concert.artist
669
670 Multi-step prefetch
671 "prefetch" can be nested more than one relationship deep using the same
672 syntax as a multi-step join:
673
674 my $rs = $schema->resultset('Tag')->search(
675 {},
676 {
677 prefetch => {
678 cd => 'artist'
679 }
680 }
681 );
682
683 # Equivalent SQL:
684 # SELECT tag.*, cd.*, artist.* FROM tag
685 # JOIN cd ON tag.cd = cd.id
686 # JOIN artist ON cd.artist = artist.id
687
688 Now accessing our "cd" and "artist" relationships does not need
689 additional SQL statements:
690
691 my $tag = $rs->first;
692 print $tag->cd->artist->name;
693
ROW-LEVEL OPERATIONS
   Retrieving a result object's Schema
696 It is possible to get a Schema object from a result object like so:
697
698 my $schema = $cd->result_source->schema;
699 # use the schema as normal:
700 my $artist_rs = $schema->resultset('Artist');
701
702 This can be useful when you don't want to pass around a Schema object
703 to every method.
704
705 Getting the value of the primary key for the last database insert
706 AKA getting last_insert_id
707
708 Thanks to the core component PK::Auto, this is straightforward:
709
710 my $foo = $rs->create(\%blah);
711 # do more stuff
712 my $id = $foo->id; # foo->my_primary_key_field will also work.
713
714 If you are not using autoincrementing primary keys, this will probably
715 not work, but then you already know the value of the last primary key
716 anyway.
717
718 Stringification
719 Employ the standard stringification technique by using the overload
720 module.
721
722 To make an object stringify itself as a single column, use something
723 like this (replace "name" with the column/method of your choice):
724
725 use overload '""' => sub { shift->name}, fallback => 1;
726
727 For more complex stringification, you can use an anonymous subroutine:
728
729 use overload '""' => sub { $_[0]->name . ", " .
730 $_[0]->address }, fallback => 1;
731
732 Stringification Example
733
734 Suppose we have two tables: "Product" and "Category". The table
735 specifications are:
736
737 Product(id, Description, category)
738 Category(id, Description)
739
740 "category" is a foreign key into the Category table.
741
742 If you have a Product object $obj and write something like
743
744 print $obj->category
745
746 things will not work as expected.
747
748 To obtain, for example, the category description, you should add this
749 method to the class defining the Category table:
750
       use overload '""' => sub {
752 my $self = shift;
753
754 return $self->Description;
755 }, fallback => 1;
756
757 Want to know if find_or_create found or created a row?
758 Just use "find_or_new" instead, then check "in_storage":
759
760 my $obj = $rs->find_or_new({ blah => 'blarg' });
761 unless ($obj->in_storage) {
762 $obj->insert;
763 # do whatever else you wanted if it was a new row
764 }
765
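       The same trick works with "update_or_new" versus "update_or_create".
       A sketch (the column names are illustrative):

           my $obj = $rs->update_or_new({ id => 7, blah => 'blarg' });
           if ($obj->in_storage) {
               # an existing row was found and updated
           }
           else {
               $obj->insert;   # it really was new
           }
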
766 Static sub-classing DBIx::Class result classes
767 AKA adding additional relationships/methods/etc. to a model for a
768 specific usage of the (shared) model.
769
770 Schema definition
771
772 package My::App::Schema;
773
774 use base 'DBIx::Class::Schema';
775
776 # load subclassed classes from My::App::Schema::Result/ResultSet
777 __PACKAGE__->load_namespaces;
778
779 # load classes from shared model
       __PACKAGE__->load_classes({
781 'My::Shared::Model::Result' => [qw/
782 Foo
783 Bar
784 /]});
785
786 1;
787
788 Result-Subclass definition
789
790 package My::App::Schema::Result::Baz;
791
792 use strict;
793 use warnings;
794 use base 'My::Shared::Model::Result::Baz';
795
796 # WARNING: Make sure you call table() again in your subclass,
797 # otherwise DBIx::Class::ResultSourceProxy::Table will not be called
798 # and the class name is not correctly registered as a source
799 __PACKAGE__->table('baz');
800
801 sub additional_method {
802 return "I'm an additional method only needed by this app";
803 }
804
805 1;
806
807 Dynamic Sub-classing DBIx::Class proxy classes
808 AKA multi-class object inflation from one table
809
       DBIx::Class classes are proxy classes, so some different techniques
       need to be employed for more than basic subclassing. In this example
       we have a single user table that carries a boolean bit for admin. We
       would like to give admin users' objects (DBIx::Class::Row) the same
       methods as a regular user's, but also special admin-only methods. It
       doesn't make sense to create two separate proxy-class files for this,
       as we would just be copying all the user methods into the Admin class.
       There is a cleaner way to accomplish this.
818
819 Overriding the "inflate_result" method within the User proxy-class
820 gives us the effect we want. This method is called by
821 DBIx::Class::ResultSet when inflating a result from storage. So we
822 grab the object being returned, inspect the values we are looking for,
823 bless it if it's an admin object, and then return it. See the example
824 below:
825
826 Schema Definition
827
828 package My::Schema;
829
830 use base qw/DBIx::Class::Schema/;
831
832 __PACKAGE__->load_namespaces;
833
834 1;
835
836 Proxy-Class definitions
837
838 package My::Schema::Result::User;
839
840 use strict;
841 use warnings;
842 use base qw/DBIx::Class::Core/;
843
844 ### Define what our admin class is, for ensure_class_loaded()
845 my $admin_class = __PACKAGE__ . '::Admin';
846
847 __PACKAGE__->table('users');
848
849 __PACKAGE__->add_columns(qw/user_id email password
850 firstname lastname active
851 admin/);
852
853 __PACKAGE__->set_primary_key('user_id');
854
855 sub inflate_result {
856 my $self = shift;
857 my $ret = $self->next::method(@_);
858 if( $ret->admin ) {### If this is an admin, rebless for extra functions
859 $self->ensure_class_loaded( $admin_class );
860 bless $ret, $admin_class;
861 }
862 return $ret;
863 }
864
865 sub hello {
866 print "I am a regular user.\n";
867 return ;
868 }
869
870 1;
871
872
873 package My::Schema::Result::User::Admin;
874
875 use strict;
876 use warnings;
877 use base qw/My::Schema::Result::User/;
878
879 # This line is important
880 __PACKAGE__->table('users');
881
882 sub hello
883 {
884 print "I am an admin.\n";
885 return;
886 }
887
888 sub do_admin_stuff
889 {
890 print "I am doing admin stuff\n";
891 return ;
892 }
893
894 1;
895
896 Test File test.pl
897
898 use warnings;
899 use strict;
900 use My::Schema;
901
902 my $user_data = { email => 'someguy@place.com',
903 password => 'pass1',
904 admin => 0 };
905
906 my $admin_data = { email => 'someadmin@adminplace.com',
907 password => 'pass2',
908 admin => 1 };
909
910 my $schema = My::Schema->connection('dbi:Pg:dbname=test');
911
912 $schema->resultset('User')->create( $user_data );
913 $schema->resultset('User')->create( $admin_data );
914
915 ### Now we search for them
916 my $user = $schema->resultset('User')->single( $user_data );
917 my $admin = $schema->resultset('User')->single( $admin_data );
918
919 print ref $user, "\n";
920 print ref $admin, "\n";
921
922 print $user->password , "\n"; # pass1
923 print $admin->password , "\n";# pass2; inherited from User
924 print $user->hello , "\n";# I am a regular user.
925 print $admin->hello, "\n";# I am an admin.
926
927 ### The statement below will NOT print
928 print "I can do admin stuff\n" if $user->can('do_admin_stuff');
929 ### The statement below will print
930 print "I can do admin stuff\n" if $admin->can('do_admin_stuff');
931
       Alternatively, you can use DBIx::Class::DynamicSubclass, which
       implements exactly the above functionality.
934
935 Skip result object creation for faster results
       DBIx::Class is not built for speed; it's built for convenience and
       ease of use. Sometimes, though, you just need to get the data and skip
       the fancy objects.
939
940 To do this simply use DBIx::Class::ResultClass::HashRefInflator.
941
942 my $rs = $schema->resultset('CD');
943
944 $rs->result_class('DBIx::Class::ResultClass::HashRefInflator');
945
946 my $hash_ref = $rs->find(1);
947
948 Wasn't that easy?
949
950 Beware, changing the Result class using "result_class" in
951 DBIx::Class::ResultSet will replace any existing class completely
952 including any special components loaded using load_components, eg
953 DBIx::Class::InflateColumn::DateTime.
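
       If you only want hashrefs for one particular query, you can instead
       pass "result_class" as a search attribute, which leaves the
       resultset's usual Result class untouched elsewhere. A brief sketch:

           my $hash_ref = $schema->resultset('CD')->search(
               { cdid => 1 },
               { result_class => 'DBIx::Class::ResultClass::HashRefInflator' },
           )->single;
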
954
955 Get raw data for blindingly fast results
       If the HashRefInflator solution above is not fast enough for you, you
       can use the DBIx::Class cursor directly to return values exactly as
       they come out of the database, with none of the convenience methods
       wrapped round them.
959
960 This is used like so:
961
           my $cursor = $rs->cursor;
           while (my @vals = $cursor->next) {
               # use $vals[0..n] here
965 }
966
       You will need to map the array offsets to particular columns (you can
       use the "select" attribute of "search" in DBIx::Class::ResultSet to
       force the ordering).
970
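       A sketch of that approach (the column names are illustrative):

           my $cursor = $schema->resultset('CD')->search(
               {},
               { select => [qw/ cdid title /] },   # offset 0 => cdid, offset 1 => title
           )->cursor;

           while (my ($cdid, $title) = $cursor->next) {
               print "$cdid: $title\n";
           }
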
RESULTSET OPERATIONS
   Getting Schema from a ResultSet
973 To get the DBIx::Class::Schema object from a ResultSet, do the
974 following:
975
976 $rs->result_source->schema
977
978 Getting Columns Of Data
979 AKA Aggregating Data
980
981 If you want to find the sum of a particular column there are several
982 ways, the obvious one is to use search:
983
984 my $rs = $schema->resultset('Items')->search(
985 {},
986 {
987 select => [ { sum => 'Cost' } ],
988 as => [ 'total_cost' ], # remember this 'as' is for DBIx::Class::ResultSet not SQL
989 }
990 );
991 my $tc = $rs->first->get_column('total_cost');
992
993 Or, you can use the DBIx::Class::ResultSetColumn, which gets returned
994 when you ask the "ResultSet" for a column using "get_column":
995
996 my $cost = $schema->resultset('Items')->get_column('Cost');
997 my $tc = $cost->sum;
998
999 With this you can also do:
1000
1001 my $minvalue = $cost->min;
1002 my $maxvalue = $cost->max;
1003
1004 Or just iterate through the values of this column only:
1005
1006 while ( my $c = $cost->next ) {
1007 print $c;
1008 }
1009
1010 foreach my $c ($cost->all) {
1011 print $c;
1012 }
1013
1014 "ResultSetColumn" only has a limited number of built-in functions. If
1015 you need one that it doesn't have, then you can use the "func" method
1016 instead:
1017
1018 my $avg = $cost->func('AVERAGE');
1019
1020 This will cause the following SQL statement to be run:
1021
1022 SELECT AVERAGE(Cost) FROM Items me
1023
1024 Which will of course only work if your database supports this function.
1025 See DBIx::Class::ResultSetColumn for more documentation.
1026
1027 Creating a result set from a set of rows
1028 Sometimes you have a (set of) result objects that you want to put into
1029 a resultset without the need to hit the DB again. You can do that by
1030 using the set_cache method:
1031
1032 my @uploadable_groups;
1033 while (my $group = $groups->next) {
1034 if ($group->can_upload($self)) {
1035 push @uploadable_groups, $group;
1036 }
1037 }
1038 my $new_rs = $self->result_source->resultset;
1039 $new_rs->set_cache(\@uploadable_groups);
1040 return $new_rs;
1041
USING RELATIONSHIPS
   Create a new row in a related table
1044 my $author = $book->create_related('author', { name => 'Fred'});
1045
1046 Search in a related table
1047 Only searches for books named 'Titanic' by the author in $author.
1048
1049 my $books_rs = $author->search_related('books', { name => 'Titanic' });
1050
1051 Delete data in a related table
1052 Deletes only the book named Titanic by the author in $author.
1053
1054 $author->delete_related('books', { name => 'Titanic' });
1055
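       The other *_related helpers in DBIx::Class::Relationship::Base follow
       the same pattern; for example (the data is illustrative):

           my $book = $author->find_related('books', { name => 'Titanic' });
           my $new  = $author->find_or_create_related('books', { name => 'Iceberg' });
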
1056 Ordering a relationship result set
1057 If you always want a relation to be ordered, you can specify this when
1058 you create the relationship.
1059
1060 To order "$book->pages" by descending page_number, create the relation
1061 as follows:
1062
1063 __PACKAGE__->has_many('pages' => 'Page', 'book', { order_by => { -desc => 'page_number'} } );
1064
1065 Filtering a relationship result set
1066 If you want to get a filtered result set, you can just add to $attr as
1067 follows:
1068
1069 __PACKAGE__->has_many('pages' => 'Page', 'book', { where => { scrap => 0 } } );
1070
1071 Many-to-many relationship bridges
1072 This is straightforward using ManyToMany:
1073
1074 package My::User;
1075 use base 'DBIx::Class::Core';
1076 __PACKAGE__->table('user');
1077 __PACKAGE__->add_columns(qw/id name/);
1078 __PACKAGE__->set_primary_key('id');
1079 __PACKAGE__->has_many('user_address' => 'My::UserAddress', 'user');
1080 __PACKAGE__->many_to_many('addresses' => 'user_address', 'address');
1081
1082 package My::UserAddress;
1083 use base 'DBIx::Class::Core';
1084 __PACKAGE__->table('user_address');
1085 __PACKAGE__->add_columns(qw/user address/);
1086 __PACKAGE__->set_primary_key(qw/user address/);
1087 __PACKAGE__->belongs_to('user' => 'My::User');
1088 __PACKAGE__->belongs_to('address' => 'My::Address');
1089
1090 package My::Address;
1091 use base 'DBIx::Class::Core';
1092 __PACKAGE__->table('address');
1093 __PACKAGE__->add_columns(qw/id street town area_code country/);
1094 __PACKAGE__->set_primary_key('id');
1095 __PACKAGE__->has_many('user_address' => 'My::UserAddress', 'address');
1096 __PACKAGE__->many_to_many('users' => 'user_address', 'user');
1097
1098 $rs = $user->addresses(); # get all addresses for a user
1099 $rs = $address->users(); # get all users for an address
1100
1101 my $address = $user->add_to_addresses( # returns a My::Address instance,
1102 # NOT a My::UserAddress instance!
1103 {
1104 country => 'United Kingdom',
1105 area_code => 'XYZ',
1106 town => 'London',
1107 street => 'Sesame',
1108 }
1109 );
1110
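       The many_to_many bridge also generates set_* and remove_from_* helpers
       for managing the link table. A brief sketch using the classes above:

           # remove the link row only - the address row itself stays
           $user->remove_from_addresses($address);

           # replace all of this user's links with the given list
           $user->set_addresses([ $address ]);
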
1111 Relationships across DB schemas
1112 Mapping relationships across DB schemas is easy as long as the schemas
1113 themselves are all accessible via the same DBI connection. In most
1114 cases, this means that they are on the same database host as each other
1115 and your connecting database user has the proper permissions to them.
1116
1117 To accomplish this one only needs to specify the DB schema name in the
1118 table declaration, like so...
1119
1120 package MyApp::Schema::Result::Artist;
1121 use base qw/DBIx::Class::Core/;
1122
1123 __PACKAGE__->table('database1.artist'); # will use "database1.artist" in FROM clause
1124
1125 __PACKAGE__->add_columns(qw/ artist_id name /);
1126 __PACKAGE__->set_primary_key('artist_id');
1127 __PACKAGE__->has_many('cds' => 'MyApp::Schema::Result::Cd');
1128
1129 1;
1130
1131 Whatever string you specify there will be used to build the "FROM"
1132 clause in SQL queries.
1133
1134 The big drawback to this is you now have DB schema names hardcoded in
1135 your class files. This becomes especially troublesome if you have
1136 multiple instances of your application to support a change lifecycle
1137 (e.g. DEV, TEST, PROD) and the DB schemas are named based on the
1138 environment (e.g. database1_dev).
1139
1140 However, one can dynamically "map" to the proper DB schema by
1141 overriding the connection method in your Schema class and building a
1142 renaming facility, like so:
1143
1144 package MyApp::Schema;
1145 use Moose;
1146
1147 extends 'DBIx::Class::Schema';
1148
1149 around connection => sub {
1150 my ( $inner, $self, $dsn, $username, $pass, $attr ) = ( shift, @_ );
1151
1152 my $postfix = delete $attr->{schema_name_postfix};
1153
1154 $inner->(@_);
1155
1156 if ( $postfix ) {
1157 $self->append_db_name($postfix);
1158 }
1159 };
1160
1161 sub append_db_name {
1162 my ( $self, $postfix ) = @_;
1163
1164 my @sources_with_db
1165 = grep
1166 { $_->name =~ /^\w+\./mx }
1167 map
1168 { $self->source($_) }
1169 $self->sources;
1170
1171 foreach my $source (@sources_with_db) {
1172 my $name = $source->name;
1173 $name =~ s{^(\w+)\.}{${1}${postfix}\.}mx;
1174
1175 $source->name($name);
1176 }
1177 }
1178
1179 1;
1180
1181 By overriding the connection method and extracting a custom option from
1182 the provided \%attr hashref one can then simply iterate over all the
1183 Schema's ResultSources, renaming them as needed.
1184
1185 To use this facility, simply add or modify the \%attr hashref that is
1186 passed to connection, as follows:
1187
1188 my $schema
1189 = MyApp::Schema->connect(
1190 $dsn,
1191 $user,
1192 $pass,
1193 {
1194 schema_name_postfix => '_dev'
1195 # ... Other options as desired ...
1196 })
1197
1198 Obviously, one could accomplish even more advanced mapping via a hash
1199 map or a callback routine.
1200
TRANSACTIONS
   Transactions with txn_do
1203 As of version 0.04001, there is improved transaction support in
1204 DBIx::Class::Storage and DBIx::Class::Schema. Here is an example of
1205 the recommended way to use it:
1206
1207 my $genus = $schema->resultset('Genus')->find(12);
1208
1209 my $coderef2 = sub {
1210 $genus->extinct(1);
1211 $genus->update;
1212 };
1213
1214 my $coderef1 = sub {
1215 $genus->add_to_species({ name => 'troglodyte' });
1216 $genus->wings(2);
1217 $genus->update;
           $schema->txn_do($coderef2); # Can have a nested transaction. Only the outer will actually commit
1219 return $genus->species;
1220 };
1221
1222 use Try::Tiny;
1223 my $rs;
1224 try {
1225 $rs = $schema->txn_do($coderef1);
1226 } catch {
1227 # Transaction failed
1228 die "the sky is falling!" #
1229 if ($_ =~ /Rollback failed/); # Rollback failed
1230
1231 deal_with_failed_transaction();
1232 };
1233
1234 Note: by default "txn_do" will re-run the coderef one more time if an
1235 error occurs due to client disconnection (e.g. the server is bounced).
1236 You need to make sure that your coderef can be invoked multiple times
1237 without terrible side effects.
1238
1239 Nested transactions will work as expected. That is, only the outermost
1240 transaction will actually issue a commit to the $dbh, and a rollback at
1241 any level of any transaction will cause the entire nested transaction
1242 to fail.
1243
1244 Nested transactions and auto-savepoints
1245 If savepoints are supported by your RDBMS, it is possible to achieve
1246 true nested transactions with minimal effort. To enable auto-savepoints
1247 via nested transactions, supply the "auto_savepoint = 1" connection
1248 attribute.
1249
1250 Here is an example of true nested transactions. In the example, we
1251 start a big task which will create several rows. Generation of data for
1252 each row is a fragile operation and might fail. If we fail creating
1253 something, depending on the type of failure, we want to abort the whole
1254 task, or only skip the failed row.
1255
1256 my $schema = MySchema->connect("dbi:Pg:dbname=my_db");
1257
1258 # Start a transaction. Every database change from here on will only be
1259 # committed into the database if the try block succeeds.
1260 use Try::Tiny;
1261 my $exception;
1262 try {
1263 $schema->txn_do(sub {
1264 # SQL: BEGIN WORK;
1265
1266 my $job = $schema->resultset('Job')->create({ name=> 'big job' });
1267 # SQL: INSERT INTO job ( name) VALUES ( 'big job' );
1268
1269 for (1..10) {
1270
1271 # Start a nested transaction, which in fact sets a savepoint.
1272 try {
1273 $schema->txn_do(sub {
1274 # SQL: SAVEPOINT savepoint_0;
1275
1276 my $thing = $schema->resultset('Thing')->create({ job=>$job->id });
1277 # SQL: INSERT INTO thing ( job) VALUES ( 1 );
1278
1279 if (rand > 0.8) {
1280 # This will generate an error, thus setting $@
1281
1282 $thing->update({force_fail=>'foo'});
1283 # SQL: UPDATE thing SET force_fail = 'foo'
1284 # WHERE ( id = 42 );
1285 }
1286 });
1287 } catch {
1288 # SQL: ROLLBACK TO SAVEPOINT savepoint_0;
1289
1290 # There was an error while creating a $thing. Depending on the error
1291 # we want to abort the whole transaction, or only rollback the
1292 # changes related to the creation of this $thing
1293
1294 # Abort the whole job
1295 if ($_ =~ /horrible_problem/) {
1296 print "something horrible happened, aborting job!";
1297 die $_; # rethrow error
1298 }
1299
1300 # Ignore this $thing, report the error, and continue with the
1301 # next $thing
1302 print "Cannot create thing: $_";
1303 }
1304 # There was no error, so save all changes since the last
1305 # savepoint.
1306
1307 # SQL: RELEASE SAVEPOINT savepoint_0;
1308 }
1309 });
1310 } catch {
1311 $exception = $_;
1312 };
1313
1314 if ($exception) {
1315 # There was an error while handling the $job. Rollback all changes
1316 # since the transaction started, including the already committed
1317 # ('released') savepoints. There will be neither a new $job nor any
1318 # $thing entry in the database.
1319
1320 # SQL: ROLLBACK;
1321
1322 print "ERROR: $exception\n";
1323 }
1324 else {
1325 # There was no error while handling the $job. Commit all changes.
1326 # Only now other connections can see the newly created $job and
1327 # @things.
1328
1329 # SQL: COMMIT;
1330
1331 print "Ok\n";
1332 }
1333
1334 In this example it might be hard to see where the rollbacks, releases
1335 and commits are happening, but it works just the same as for plain
1336 txn_do: If the try-block around txn_do fails, a rollback is issued. If
1337 the try succeeds, the transaction is committed (or the savepoint
1338 released).
1339
1340 While you can get more fine-grained control using "svp_begin",
1341 "svp_release" and "svp_rollback", it is strongly recommended to use
1342 "txn_do" with coderefs.
1343
1344 Simple Transactions with DBIx::Class::Storage::TxnScopeGuard
1345 An easy way to use transactions is with
1346 DBIx::Class::Storage::TxnScopeGuard. See "Automatically creating
1347 related objects" for an example.
1348
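       A minimal sketch of the guard pattern (the resultset and data are
       illustrative):

           my $guard = $schema->txn_scope_guard;

           $schema->resultset('User')->create({ name => 'jrobinson' });
           # ... more database work inside the same transaction ...

           $guard->commit;   # if the guard goes out of scope uncommitted, it rolls back
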
       Note that unlike txn_do, TxnScopeGuard will only make sure the
       connection is alive when issuing the "BEGIN" statement. It will not
       (and really cannot) retry if the server goes away mid-operation,
       unlike "txn_do".
1353
SQL
   Creating Schemas From An Existing Database
1356 DBIx::Class::Schema::Loader will connect to a database and create a
1357 DBIx::Class::Schema and associated sources by examining the database.
1358
       The recommended way of achieving this is to use the dbicdump utility or
1360 the Catalyst helper, as described in Manual::Intro.
1361
1362 Alternatively, use the make_schema_at method:
1363
           perl -MDBIx::Class::Schema::Loader=make_schema_at,dump_to_dir:./lib \
             -e 'make_schema_at("My::Schema",
                  { db_schema => "myschema", components => ["InflateColumn::DateTime"] },
                  [ "dbi:Pg:dbname=foo", "username", "password" ])'
1368
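       The same call from a small script rather than a one-liner might look
       like this (a sketch; the connection details are illustrative):

           use DBIx::Class::Schema::Loader qw/ make_schema_at /;

           make_schema_at(
               'My::Schema',
               {
                   db_schema      => 'myschema',
                   components     => [ 'InflateColumn::DateTime' ],
                   dump_directory => './lib',
               },
               [ 'dbi:Pg:dbname=foo', 'username', 'password' ],
           );
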
1369 This will create a tree of files rooted at "./lib/My/Schema/"
1370 containing source definitions for all the tables found in the
1371 "myschema" schema in the "foo" database.
1372
1373 Creating DDL SQL
1374 The following functionality requires you to have SQL::Translator (also
1375 known as "SQL Fairy") installed.
1376
1377 To create a set of database-specific .sql files for the above schema:
1378
1379 my $schema = My::Schema->connect($dsn);
1380 $schema->create_ddl_dir(['MySQL', 'SQLite', 'PostgreSQL'],
1381 '0.1',
1382 './dbscriptdir/'
1383 );
1384
1385 By default this will create schema files in the current directory, for
1386 MySQL, SQLite and PostgreSQL, using the $VERSION from your Schema.pm.
1387
1388 To create a new database using the schema:
1389
1390 my $schema = My::Schema->connect($dsn);
1391 $schema->deploy({ add_drop_table => 1});
1392
1393 To import created .sql files using the mysql client:
1394
1395 mysql -h "host" -D "database" -u "user" -p < My_Schema_1.0_MySQL.sql
1396
1397 To create "ALTER TABLE" conversion scripts to update a database to a
1398 newer version of your schema at a later point, first set a new $VERSION
1399 in your Schema file, then:
1400
1401 my $schema = My::Schema->connect($dsn);
1402 $schema->create_ddl_dir(['MySQL', 'SQLite', 'PostgreSQL'],
1403 '0.2',
1404 '/dbscriptdir/',
1405 '0.1'
1406 );
1407
1408 This will produce new database-specific .sql files for the new version
1409 of the schema, plus scripts to convert from version 0.1 to 0.2. This
1410 requires that the files for 0.1 as created above are available in the
1411 given directory to diff against.
1412
1413 Select from dual
       Dummy tables are needed by some databases to allow calling functions
       or expressions that aren't based on table content. For examples of how
       this applies to various database types, see:
       <http://troels.arvin.dk/db/rdbms/#other-dummy_table>.
1418
       Note: If you're using Oracle's dual table, don't ever do anything
       other than a SELECT; if you CRUD on your dual table you *will* break
       your database.
1422
1423 Make a table class as you would for any other table
1424
1425 package MyAppDB::Dual;
1426 use strict;
1427 use warnings;
1428 use base 'DBIx::Class::Core';
1429 __PACKAGE__->table("Dual");
1430 __PACKAGE__->add_columns(
1431 "dummy",
1432 { data_type => "VARCHAR2", is_nullable => 0, size => 1 },
1433 );
1434
1435 Once you've loaded your table class select from it using "select" and
1436 "as" instead of "columns"
1437
1438 my $rs = $schema->resultset('Dual')->search(undef,
             { select => [ 'sysdate' ],
1440 as => [ 'now' ]
1441 },
1442 );
1443
       All you have to do now is be careful how you access your resultset.
       The code below will not work, because there is no column called 'now'
       in the Dual table class:
1447
1448 while (my $dual = $rs->next) {
1449 print $dual->now."\n";
1450 }
1451 # Can't locate object method "now" via package "MyAppDB::Dual" at headshot.pl line 23.
1452
       You could of course use 'dummy' in "as" instead of 'now', or
       "add_columns" to your Dual class for whatever you wanted to select
       from dual, but that's just silly. Instead, use "get_column":
1456
1457 while (my $dual = $rs->next) {
1458 print $dual->get_column('now')."\n";
1459 }
1460
1461 Or use "cursor"
1462
1463 my $cursor = $rs->cursor;
1464 while (my @vals = $cursor->next) {
1465 print $vals[0]."\n";
1466 }
1467
       If you're going to use this "trick" together with "deploy" in
       DBIx::Class::Schema or "create_ddl_dir" in DBIx::Class::Schema, a
       table called "dual" will be created in your current schema. This would
       overlap "sys.dual" and you could no longer fetch "sysdate" or
       "sequence.nextval" from dual. To avoid this problem, just tell
       SQL::Translator not to create the dual table:
1474
1475 my $sqlt_args = {
1476 add_drop_table => 1,
           parser_args    => { sources => [ grep $_ ne 'Dual', $schema->sources ] },
1478 };
1479 $schema->create_ddl_dir( [qw/Oracle/], undef, './sql', undef, $sqlt_args );
1480
1481 Or use DBIx::Class::ResultClass::HashRefInflator
1482
1483 $rs->result_class('DBIx::Class::ResultClass::HashRefInflator');
1484 while ( my $dual = $rs->next ) {
1485 print $dual->{now}."\n";
1486 }
1487
1488 Here are some example "select" conditions to illustrate the different
1489 syntax you could use for doing stuff like
1490 "oracles.heavily(nested(functions_can('take', 'lots'), OF), 'args')"
1491
1492 # get a sequence value
1493 select => [ 'A_SEQ.nextval' ],
1494
1495 # get create table sql
1496 select => [ { 'dbms_metadata.get_ddl' => [ "'TABLE'", "'ARTIST'" ]} ],
1497
1498 # get a random num between 0 and 100
1499 select => [ { "trunc" => [ { "dbms_random.value" => [0,100] } ]} ],
1500
1501 # what year is it?
1502 select => [ { 'extract' => [ \'year from sysdate' ] } ],
1503
1504 # do some math
1505 select => [ {'round' => [{'cos' => [ \'180 * 3.14159265359/180' ]}]}],
1506
1507 # which day of the week were you born on?
1508 select => [{'to_char' => [{'to_date' => [ "'25-DEC-1980'", "'dd-mon-yyyy'" ]}, "'day'"]}],
1509
1510 # select 16 rows from dual
1511 select => [ "'hello'" ],
1512 as => [ 'world' ],
1513 group_by => [ 'cube( 1, 2, 3, 4 )' ],
1514
1515 Adding Indexes And Functions To Your SQL
1516 Often you will want indexes on columns on your table to speed up
1517 searching. To do this, create a method called "sqlt_deploy_hook" in the
1518 relevant source class (refer to the advanced callback system if you
1519 wish to share a hook between multiple sources):
1520
1521 package My::Schema::Result::Artist;
1522
1523 __PACKAGE__->table('artist');
       __PACKAGE__->add_columns(id => { ... }, name => { ... });
1525
1526 sub sqlt_deploy_hook {
1527 my ($self, $sqlt_table) = @_;
1528
1529 $sqlt_table->add_index(name => 'idx_name', fields => ['name']);
1530 }
1531
1532 1;
1533
1534 Sometimes you might want to change the index depending on the type of
1535 the database for which SQL is being generated:
1536
           (my $db_type = $sqlt_table->schema->translator->producer_type)
               =~ s/^SQL::Translator::Producer:://;
1539
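       For example, $db_type could then be used to add an index only for a
       particular producer (a sketch, reusing the add_index call from above):

           $sqlt_table->add_index(name => 'idx_name', fields => ['name'])
               if $db_type eq 'MySQL';
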
1540 You can also add hooks to the schema level to stop certain tables being
1541 created:
1542
1543 package My::Schema;
1544
1545 ...
1546
1547 sub sqlt_deploy_hook {
1548 my ($self, $sqlt_schema) = @_;
1549
1550 $sqlt_schema->drop_table('table_name');
1551 }
1552
1553 You could also add views, procedures or triggers to the output using
1554 "add_view" in SQL::Translator::Schema, "add_procedure" in
1555 SQL::Translator::Schema or "add_trigger" in SQL::Translator::Schema.
1556
1557 Schema versioning
1558 The following example shows simplistically how you might use
1559 DBIx::Class to deploy versioned schemas to your customers. The basic
1560 process is as follows:
1561
1562 1. Create a DBIx::Class schema
1563
1564 2. Save the schema
1565
1566 3. Deploy to customers
1567
1568 4. Modify schema to change functionality
1569
1570 5. Deploy update to customers
1571
1572 Create a DBIx::Class schema
1573
1574 This can either be done manually, or generated from an existing
1575 database as described under "Creating Schemas From An Existing
1576 Database"
1577
1578 Save the schema
1579
1580 Call "create_ddl_dir" in DBIx::Class::Schema as above under "Creating
1581 DDL SQL".
1582
1583 Deploy to customers
1584
1585 There are several ways you could deploy your schema. These are probably
1586 beyond the scope of this recipe, but might include:
1587
1588 1. Require customer to apply manually using their RDBMS.
1589
1590 2. Package along with your app, making database dump/schema
1591 update/tests all part of your install.
1592
1593 Modify the schema to change functionality
1594
1595 As your application evolves, it may be necessary to modify your schema
1596 to change functionality. Once the changes are made to your schema in
1597 DBIx::Class, export the modified schema and the conversion scripts as
1598 in "Creating DDL SQL".
1599
1600 Deploy update to customers
1601
1602 Add the DBIx::Class::Schema::Versioned schema component to your Schema
1603 class. This will add a new table to your database called
1604 "dbix_class_schema_vesion" which will keep track of which version is
1605 installed and warn if the user tries to run a newer schema version than
1606 the database thinks it has.
1607
1608 Alternatively, you can send the conversion SQL scripts to your
1609 customers as above.
1610
1611 Setting quoting for the generated SQL
1612 If the database contains column names with spaces and/or reserved
1613 words, they need to be quoted in the SQL queries. This is done using:
1614
1615 $schema->storage->sql_maker->quote_char([ qw/[ ]/] );
1616 $schema->storage->sql_maker->name_sep('.');
1617
       The first call sets the quote characters: either a pair of matching
       brackets, or a single character such as " or ':
1620
1621 $schema->storage->sql_maker->quote_char('"');
1622
1623 Check the documentation of your database for the correct quote
       characters to use. "name_sep" needs to be set to allow the SQL
       generator to put the quotes in the correct place, and defaults to "."
       if not supplied.
1627
1628 In most cases you should set these as part of the arguments passed to
1629 "connect" in DBIx::Class::Schema:
1630
1631 my $schema = My::Schema->connect(
1632 'dbi:mysql:my_db',
1633 'db_user',
1634 'db_password',
1635 {
1636 quote_char => '"',
1637 name_sep => '.'
1638 }
1639 )
1640
1641 In some cases, quoting will be required for all users of a schema. To
1642 enforce this, you can also overload the "connection" method for your
1643 schema class:
1644
1645 sub connection {
1646 my $self = shift;
1647 my $rv = $self->next::method( @_ );
1648 $rv->storage->sql_maker->quote_char([ qw/[ ]/ ]);
1649 $rv->storage->sql_maker->name_sep('.');
1650 return $rv;
1651 }
1652
1653 Working with PostgreSQL array types
1654 You can also assign values to PostgreSQL array columns by passing array
1655 references in the "\%columns" ("\%vals") hashref of the "create" in
1656 DBIx::Class::ResultSet and "update" in DBIx::Class::Row family of
1657 methods:
1658
1659 $resultset->create({
1660 numbers => [1, 2, 3]
1661 });
1662
1663 $result->update(
1664 {
1665 numbers => [1, 2, 3]
1666 },
1667 {
1668 day => '2008-11-24'
1669 }
1670 );
1671
1672 In conditions (e.g. "\%cond" in the "search" in DBIx::Class::ResultSet
1673 family of methods) you cannot directly use array references (since this
1674 is interpreted as a list of values to be "OR"ed), but you can use the
1675 following syntax to force passing them as bind values:
1676
1677 $resultset->search(
1678 {
1679 numbers => { -value => [1, 2, 3] }
1680 }
1681 );
1682
1683 Formatting DateTime objects in queries
1684 To ensure "WHERE" conditions containing DateTime arguments are properly
1685 formatted to be understood by your RDBMS, you must use the DateTime
1686 formatter returned by "datetime_parser" in DBIx::Class::Storage::DBI to
1687 format any DateTime objects you pass to search conditions. Any Storage
1688 object attached to your Schema provides a correct DateTime formatter,
1689 so all you have to do is:
1690
1691 my $dtf = $schema->storage->datetime_parser;
1692 my $rs = $schema->resultset('users')->search(
1693 {
1694 signup_date => {
1695 -between => [
1696 $dtf->format_datetime($dt_start),
1697 $dtf->format_datetime($dt_end),
1698 ],
1699 }
1700 },
1701 );
1702
1703 Without doing this the query will contain the simple stringification of
1704 the "DateTime" object, which almost never matches the RDBMS
1705 expectations.
1706
1707 This kludge is necessary only for conditions passed to search and
1708 "find" in DBIx::Class::ResultSet, whereas create and "update" in
1709 DBIx::Class::Row (but not "update" in DBIx::Class::ResultSet) are
1710 DBIx::Class::InflateColumn-aware and will do the right thing when
1711 supplied an inflated DateTime object.
1712
1713 Using Unicode
1714 When using unicode character data there are two alternatives - either
1715 your database supports unicode characters (including setting the utf8
1716 flag on the returned string), or you need to encode/decode data
1717 appropriately each time a string field is inserted into or retrieved
1718 from the database. It is better to avoid encoding/decoding data and to
1719 use your database's own unicode capabilities if at all possible.
1720
1721 The DBIx::Class::UTF8Columns component handles storing selected unicode
1722 columns in a database that does not directly support unicode. If used
       with a database that does correctly handle unicode, then strange and
       unexpected data corruption will occur.
1725
1726 The Catalyst Wiki Unicode page at
1727 <http://wiki.catalystframework.org/wiki/tutorialsandhowtos/using_unicode>
1728 has additional information on the use of Unicode with Catalyst and
1729 DBIx::Class.
1730
1731 The following databases do correctly handle unicode data:-
1732
1733 MySQL
1734
       MySQL supports unicode, and will correctly flag utf8 data from the
       database if the "mysql_enable_utf8" option is set in the connect
       options.
1737
1738 my $schema = My::Schema->connection('dbi:mysql:dbname=test',
1739 $user, $pass,
1740 { mysql_enable_utf8 => 1} );
1741
       When set, data retrieved from a textual column type (char, varchar,
       etc.) will have the UTF-8 flag turned on if necessary. This enables
1744 character semantics on that string. You will also need to ensure that
1745 your database / table / column is configured to use UTF8. See Chapter
1746 10 of the mysql manual for details.
1747
1748 See DBD::mysql for further details.
1749
1750 Oracle
1751
1752 Information about Oracle support for unicode can be found in "UNICODE"
1753 in DBD::Oracle.
1754
1755 PostgreSQL
1756
1757 PostgreSQL supports unicode if the character set is correctly set at
1758 database creation time. Additionally, the "pg_enable_utf8" connect
1759 attribute should be set to ensure unicode data is correctly marked.
1760
1761 my $schema = My::Schema->connection('dbi:Pg:dbname=test',
1762 $user, $pass,
1763 { pg_enable_utf8 => 1} );
1764
1765 Further information can be found in DBD::Pg.
1766
1767 SQLite
1768
1769 SQLite version 3 and above natively use unicode internally. To
1770 correctly mark unicode strings taken from the database, the
1771 "sqlite_unicode" flag should be set at connect time (in versions of
1772 DBD::SQLite prior to 1.27 this attribute was named "unicode").
1773
1774 my $schema = My::Schema->connection('dbi:SQLite:/tmp/test.db',
1775 '', '',
1776 { sqlite_unicode => 1} );
1777
BOOTSTRAPPING/MIGRATING
1779 Easy migration from class-based to schema-based setup
1780 You want to start using the schema-based approach to DBIx::Class (see
1781 "Setting it up manually" in DBIx::Class::Manual::Intro), but have an
1782 established class-based setup with lots of existing classes that you
1783 don't want to move by hand. Try this nifty script instead:
1784
1785 use MyDB;
1786 use SQL::Translator;
1787
1788 my $schema = MyDB->schema_instance;
1789
1790 my $translator = SQL::Translator->new(
1791 debug => $debug || 0,
1792 trace => $trace || 0,
1793 no_comments => $no_comments || 0,
1794 show_warnings => $show_warnings || 0,
1795 add_drop_table => $add_drop_table || 0,
1796 validate => $validate || 0,
1797 parser_args => {
1798 'DBIx::Schema' => $schema,
1799 },
1800 producer_args => {
1801 'prefix' => 'My::Schema',
1802 },
1803 );
1804
1805 $translator->parser('SQL::Translator::Parser::DBIx::Class');
1806 $translator->producer('SQL::Translator::Producer::DBIx::Class::File');
1807
1808 my $output = $translator->translate(@args) or die
1809 "Error: " . $translator->error;
1810
1811 print $output;
1812
1813 You could use Module::Find to search for all subclasses in the MyDB::*
1814 namespace, which is currently left as an exercise for the reader.
1815
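       A minimal, untested sketch of that exercise, assuming your classes all
       live under the MyDB::* namespace:

           use Module::Find qw(useall);

           # find and load every module under the MyDB:: namespace in one go
           my @classes = useall('MyDB');
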
OVERLOADING METHODS
1817 DBIx::Class uses the Class::C3 package, which provides for redispatch
1818 of method calls, useful for things like default values and triggers.
1819 You have to use calls to "next::method" to overload methods. More
1820 information on using Class::C3 with DBIx::Class can be found in
1821 DBIx::Class::Manual::Component.
1822
1823 Setting default values for a row
1824 It's as simple as overriding the "new" method. Note the use of
1825 "next::method".
1826
1827 sub new {
1828 my ( $class, $attrs ) = @_;
1829
1830 $attrs->{foo} = 'bar' unless defined $attrs->{foo};
1831
1832 my $new = $class->next::method($attrs);
1833
1834 return $new;
1835 }
1836
1837 For more information about "next::method", look in the Class::C3
1838 documentation. See also DBIx::Class::Manual::Component for more ways to
1839 write your own base classes to do this.
1840
1841 People looking for ways to do "triggers" with DBIx::Class are probably
1842 just looking for this.
1843
1844 Changing one field whenever another changes
1845 For example, say that you have three columns, "id", "number", and
1846 "squared". You would like to make changes to "number" and have
1847 "squared" be automagically set to the value of "number" squared. You
1848 can accomplish this by wrapping the "number" accessor with the "around"
1849 method modifier, available through either Class::Method::Modifiers,
1850 Moose or Moose-like modules):
1851
1852 around number => sub {
1853 my ($orig, $self) = (shift, shift);
1854
1855 if (@_) {
1856 my $value = $_[0];
1857 $self->squared( $value * $value );
1858 }
1859
1860 $self->$orig(@_);
1861 };
1862
1863 Note that the hard work is done by the call to "$self->$orig", which
1864 redispatches your call to store_column in the superclass(es).
1865
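       An equivalent sketch (not part of the original recipe) overrides
       store_column() directly instead of wrapping the generated accessor:

           sub store_column {
               my ($self, $name, $value) = @_;

               # keep "squared" in step whenever "number" is stored
               $self->squared($value * $value) if $name eq 'number';

               return $self->next::method($name, $value);
           }
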
1866 Generally, if this is a calculation your database can easily do,
1867 avoid storing the calculated value; it is safer to calculate it when
1868 needed than to rely on the stored data staying in sync.
1869
1870 Automatically creating related objects
1871 You might have a class "Artist" which has many "CD"s. Further, you
1872 want to create a "CD" object every time you insert an "Artist" object.
1873 You can accomplish this by overriding "insert" on your objects:
1874
1875 sub insert {
1876 my ( $self, @args ) = @_;
1877 $self->next::method(@args);
1878 $self->create_related ('cds', \%initial_cd_data );
1879 return $self;
1880 }
1881
1882 If you want to wrap the two inserts in a transaction (for consistency,
1883 an excellent idea), you can use the awesome
1884 DBIx::Class::Storage::TxnScopeGuard:
1885
1886 sub insert {
1887 my ( $self, @args ) = @_;
1888
1889 my $guard = $self->result_source->schema->txn_scope_guard;
1890
1891 $self->next::method(@args);
1892 $self->create_related ('cds', \%initial_cd_data );
1893
1894 $guard->commit;
1895
1896 return $self;
1897 }
1898
1899 Wrapping/overloading a column accessor
1900 Problem:
1901
1902 Say you have a table "Camera" and want to associate a description with
1903 each camera. For most cameras, you'll be able to generate the
1904 description from the other columns. However, in a few special cases you
1905 may want to associate a custom description with a camera.
1906
1907 Solution:
1908
1909 In your database schema, define a description field in the "Camera"
1910 table that can contain text and null values.
1911
1912 In DBIC, we'll overload the column accessor to provide a sane default
1913 if no custom description is defined. The accessor will either return or
1914 generate the description, depending on whether the field is null or
1915 not.
1916
1917 First, in your "Camera" schema class, define the description field as
1918 follows:
1919
1920 __PACKAGE__->add_columns(description => { accessor => '_description' });
1921
1922 Next, we'll define the accessor-wrapper subroutine:
1923
1924 sub description {
1925 my $self = shift;
1926
1927 # If there is an update to the column, we'll let the original accessor
1928 # deal with it.
1929 return $self->_description(@_) if @_;
1930
1931 # Fetch the column value.
1932 my $description = $self->_description;
1933
1934 # If there's something in the description field, then just return that.
1935 return $description if defined $description && length $description;
1936
1937 # Otherwise, generate a description.
1938 return $self->generate_description;
1939 }
1940
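       The generate_description() method is left undefined above; a purely
       hypothetical version, composing text from other (equally hypothetical)
       columns, might be:

           sub generate_description {
               my $self = shift;
               return join(' ', $self->manufacturer, $self->model, 'camera');
           }
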
DEBUGGING AND PROFILING
1942 DBIx::Class objects with Data::Dumper
1943 Data::Dumper can be a very useful tool for debugging, but sometimes it
1944 can be hard to find the pertinent data in all the data it can generate.
1945 Specifically, if one naively tries to use it like so,
1946
1947 use Data::Dumper;
1948
1949 my $cd = $schema->resultset('CD')->find(1);
1950 print Dumper($cd);
1951
1952 several pages worth of data from the CD object's schema and result
1953 source will be dumped to the screen. Since usually one is only
1954 interested in a few column values of the object, this is not very
1955 helpful.
1956
1957 Luckily, it is possible to modify the data before Data::Dumper outputs
1958 it. Simply define a hook that Data::Dumper will call on the object
1959 before dumping it. For example,
1960
1961 package My::DB::CD;
1962
1963 sub _dumper_hook {
1964 $_[0] = bless {
1965 %{ $_[0] },
1966 result_source => undef,
1967 }, ref($_[0]);
1968 }
1969
1970 [...]
1971
1972 use Data::Dumper;
1973
1974 local $Data::Dumper::Freezer = '_dumper_hook';
1975
1976 my $cd = $schema->resultset('CD')->find(1);
1977 print Dumper($cd);
1978 # dumps $cd without its ResultSource
1979
1980 If the structure of your schema is such that there is a common base
1981 class for all your table classes, simply put a method similar to
1982 "_dumper_hook" in the base class and set $Data::Dumper::Freezer to its
1983 name and Data::Dumper will automagically clean up your data before
1984 printing it. See "EXAMPLES" in Data::Dumper for more information.
1985
1986 Profiling
1987 When you enable DBIx::Class::Storage's debugging it prints the SQL
1988 executed as well as notifications of query completion and transaction
1989 begin/commit. If you'd like to profile the SQL you can subclass the
1990 DBIx::Class::Storage::Statistics class and write your own profiling
1991 mechanism:
1992
1993 package My::Profiler;
1994 use strict;
1995
1996 use base 'DBIx::Class::Storage::Statistics';
1997
1998 use Time::HiRes qw(time);
1999
2000 my $start;
2001
2002 sub query_start {
2003 my $self = shift();
2004 my $sql = shift();
2005 my @params = @_;
2006
2007 $self->print("Executing $sql: ".join(', ', @params)."\n");
2008 $start = time();
2009 }
2010
2011 sub query_end {
2012 my $self = shift();
2013 my $sql = shift();
2014 my @params = @_;
2015
2016 my $elapsed = sprintf("%0.4f", time() - $start);
2017 $self->print("Execution took $elapsed seconds.\n");
2018 $start = undef;
2019 }
2020
2021 1;
2022
2023 You can then install that class as the debugging object:
2024
2025 __PACKAGE__->storage->debugobj(My::Profiler->new());
2026 __PACKAGE__->storage->debug(1);
2027
2028 A more complicated example might involve storing each execution of SQL in
2029 an array, keyed on the statement (%calls below is a package-level hash):
2030
2031 sub query_end {
2032 my $self = shift();
2033 my $sql = shift();
2034 my @params = @_;
2035
2036 my $elapsed = time() - $start;
2037 push(@{ $calls{$sql} }, {
2038 params => \@params,
2039 elapsed => $elapsed
2040 });
2041 }
2042
2043 You could then calculate average, high and low execution times for each
2044 SQL statement and dig down to see if certain parameters cause aberrant
2045 behavior. You might want to check out DBIx::Class::QueryLog as well.
2046
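       For instance, a rough summary report over the collected %calls data (a
       sketch only) could look like:

           for my $sql (sort keys %calls) {
               my @times = map { $_->{elapsed} } @{ $calls{$sql} };
               my ($min, $max) = (sort { $a <=> $b } @times)[0, -1];

               my $avg = 0;
               $avg += $_ for @times;
               $avg /= @times;

               printf "%-60.60s avg %.4fs min %.4fs max %.4fs (%d calls)\n",
                   $sql, $avg, $min, $max, scalar @times;
           }
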
IMPROVING PERFORMANCE
2048 • Install Class::XSAccessor to speed up Class::Accessor::Grouped.
2049
2050 • On Perl 5.8 install Class::C3::XS.
2051
2052 • prefetch relationships, where possible. See "Using joins and
2053 prefetch".
2054
2055 • Use populate in void context to insert data when you don't need the
2056 resulting result objects, if possible (sketch below), but see the caveats.
2057
2058 When inserting many rows, for best results, populate a large number
2059 of rows at a time, but not so large that the table is locked for an
2060 unacceptably long time.
2061
2062 If using create instead, use a transaction and commit every "X"
2063 rows, where "X" is whatever gives you the best performance without
2064 locking the table for too long.
2065
2066 • When selecting many rows, if you don't need full-blown
2067 DBIx::Class::Row objects, consider using
2068 DBIx::Class::ResultClass::HashRefInflator (see the sketch below).
2069
2070 • See also "STARTUP SPEED" and "MEMORY USAGE" in this document.
2071
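       A sketch of the populate tip above, bulk-inserting rows in void context
       without building result objects (table and data are hypothetical):

           $schema->resultset('Artist')->populate([
               [ qw/name/ ],          # column names come first
               [ 'Artist One' ],
               [ 'Artist Two' ],
           ]);
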
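       And a sketch of the HashRefInflator tip, fetching plain hashrefs instead
       of full row objects:

           my $rs = $schema->resultset('CD')->search(
               {},
               { columns => [qw/cdid title/] },
           );
           $rs->result_class('DBIx::Class::ResultClass::HashRefInflator');

           for my $row ($rs->all) {
               printf "%s: %s\n", $row->{cdid}, $row->{title};  # plain hashref
           }
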
STARTUP SPEED
2073 DBIx::Class programs can have a significant startup delay as the ORM
2074 loads all the relevant classes. This section examines techniques for
2075 reducing the startup delay.
2076
2077 These tips are listed in order of decreasing effectiveness - so the
2078 first tip, if applicable, should have the greatest effect on your
2079 application.
2080
2081 Statically Define Your Schema
2082 If you are using DBIx::Class::Schema::Loader to build the classes
2083 dynamically based on the database schema then there will be a
2084 significant startup delay.
2085
2086 For production use a statically defined schema (which can be generated
2087 using DBIx::Class::Schema::Loader to dump the database schema once -
2088 see make_schema_at and dump_directory for more details on creating
2089 static schemas from a database).
2090
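       A sketch of such a one-off static dump (connection details and paths are
       placeholders):

           use DBIx::Class::Schema::Loader qw/make_schema_at/;

           make_schema_at(
               'My::Schema',
               { debug => 1, dump_directory => './lib' },
               [ 'dbi:Pg:dbname=mydb', $user, $pass ],
           );
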
2091 Move Common Startup into a Base Class
2092 Typically DBIx::Class result classes start off with
2093
2094 use base qw/DBIx::Class::Core/;
2095 __PACKAGE__->load_components(qw/InflateColumn::DateTime/);
2096
2097 If this preamble is moved into a common base class:-
2098
2099 package MyDBICbase;
2100
2101 use base qw/DBIx::Class::Core/;
2102 __PACKAGE__->load_components(qw/InflateColumn::DateTime/);
2103 1;
2104
2105 and each result class then uses this as a base:-
2106
2107 use base qw/MyDBICbase/;
2108
2109 then the load_components is only performed once, which can result in a
2110 considerable startup speedup for schemas with many classes.
2111
2112 Explicitly List Schema Result Classes
2113 The schema class will normally contain
2114
2115 __PACKAGE__->load_classes();
2116
2117 to load the result classes. This will use Module::Find to find and load
2118 the appropriate modules. Explicitly defining the classes you wish to
2119 load will remove the overhead of Module::Find and the related directory
2120 operations:
2121
2122 __PACKAGE__->load_classes(qw/ CD Artist Track /);
2123
2124 If you are instead using load_namespaces to load the appropriate
2125 classes, there is no direct alternative that avoids the overhead of
2126 Module::Find.
2127
MEMORY USAGE
2129 Cached statements
2130 DBIx::Class normally caches all statements with prepare_cached(). This
2131 is normally a good idea, but if too many statements are cached, the
2132 database may use too much memory and may eventually run out and fail
2133 entirely. If you suspect this may be the case, you may want to examine
2134 DBI's CachedKids hash:
2135
2136 # print all currently cached prepared statements
2137 print for keys %{$schema->storage->dbh->{CachedKids}};
2138 # get a count of currently cached prepared statements
2139 my $count = scalar keys %{$schema->storage->dbh->{CachedKids}};
2140
2141 If it's appropriate, you can simply clear these statements,
2142 automatically deallocating them in the database:
2143
2144 my $kids = $schema->storage->dbh->{CachedKids};
2145 delete @{$kids}{keys %$kids} if scalar keys %$kids > 100;
2146
2147 But what you probably want is to expire unused statements and not those
2148 that are used frequently. You can accomplish this with Tie::Cache or
2149 Tie::Cache::LRU:
2150
2151 use Tie::Cache;
2152 use DB::Main;
2153 my $schema = DB::Main->connect($dbi_dsn, $user, $pass, {
2154 on_connect_do => sub { tie %{shift->_dbh->{CachedKids}}, 'Tie::Cache', 100 },
2155 });
2156
FURTHER QUESTIONS?
2158 Check the list of additional DBIC resources.
2159
COPYRIGHT AND LICENSE
2161 This module is free software copyright by the DBIx::Class (DBIC)
2162 authors. You can redistribute it and/or modify it under the same terms
2163 as the DBIx::Class library.
2164
2165
2166
2167perl v5.36.0 2022-07-22 DBIx::Class::Manual::Cookbook(3)