SQL::Translator::Manual(3)  User Contributed Perl Documentation  SQL::Translator::Manual(3)

NAME

       SQL::Translator::Manual - sqlfairy user manual


SYNOPSIS

       SQL::Translator (AKA "SQLFairy") is a collection of modules for
       transforming (mainly) SQL DDL files into a variety of other formats,
       including other SQL dialects, documentation, images, and code.  In this
       manual, we will attempt to address how to use SQLFairy for common
       tasks.  For a lower-level discussion of how the code works, please read
       the documentation for SQL::Translator.

       It may prove helpful to have a general understanding of the SQLFairy
       code before continuing.  The code can be broken into three conceptual
       groupings:

       ·   Parsers

           The parsers are responsible for reading the input files and
           describing them to the Schema object middleware.

       ·   Producers

           The producers create the output as described by the Schema
           middleware.

       ·   Schema objects

           The Schema objects bridge the communication between the Parsers and
           Producers by representing any parsed file through a standard set of
           generic objects to represent concepts like Tables, Fields
           (columns), Indices, Constraints, etc.

       It's not necessary to understand how to write or manipulate any of
       these for most common tasks, but you should be aware of the concepts as
       they will be referenced later in this document.


SQLFAIRY SCRIPTS

       Most common tasks can be accomplished through the use of the script
       interfaces to the SQL::Translator code.  All SQLFairy scripts begin
       with "sqlt."  Here are the scripts and a description of what they each
       do:

       ·   sqlt

           This is the main interface for text-to-text translations, e.g.,
           converting a MySQL schema to Oracle.

       ·   sqlt-diagram

           This is a tailored interface for the Diagram producer and its
           myriad options.

       ·   sqlt-diff

           This script will examine two schemas and report the SQL commands
           (ALTER, CREATE) needed to turn the first schema into the second.

       ·   sqlt-dumper

           This script generates a Perl script that can be used to connect to
           a database and dump the data in each table in different formats,
           similar to the "mysqldump" program.

       ·   sqlt-graph

           This is an interface to the GraphViz visualization tool and its
           myriad options.

       ·   sqlt.cgi

           This is a CGI script that presents an HTML form for uploading or
           pasting a schema and choosing an output and the output options.

       To read the full documentation for each script, use "perldoc" (or
       execute any of the command-line scripts with the "--help" flag).


CONVERTING SQL DIALECTS

       Probably the most common task SQLFairy is used for is to convert one
       dialect of SQL to another.  If you have a text description of an SQL
       database (AKA a "DDL" -- "Data Definition Language"), then you should
       use the "sqlt" script with switches to indicate the parser and producer
       and the name of the text file as the final argument.  For example, to
       convert the "foo.sql" MySQL schema to a version suitable for
       PostgreSQL, you would do the following:

         $ sqlt -f MySQL -t PostgreSQL foo.sql > foo-pg.sql

       The "from" and "to" options are case-sensitive and must match exactly
       the names of the Parser and Producer classes in SQL::Translator.  For a
       complete listing of your options, execute "sqlt" with the "--list"
       flag.

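       The same conversion can be performed from Perl by calling the
       SQL::Translator API directly.  This is a minimal sketch, assuming
       SQL::Translator is installed and a "foo.sql" file exists; the parser
       and producer names are case-sensitive, just as with the "-f" and "-t"
       switches:

         #!/usr/bin/perl
         use strict;
         use warnings;
         use SQL::Translator;

         # Same translation as: sqlt -f MySQL -t PostgreSQL foo.sql
         my $translator = SQL::Translator->new(
             parser   => 'MySQL',
             producer => 'PostgreSQL',
         );

         # translate() accepts a filename (or a reference to a string
         # of DDL) and returns the produced output as a string.
         my $output = $translator->translate('foo.sql')
             or die $translator->error;
         print $output;
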

EXTRACT SQL SCHEMAS DIRECTLY FROM DATABASE

       It is possible to extract some schemas directly from the database
       without parsing a text file (the "foo.sql" in the above example).  This
       can prove significantly faster than parsing a text file.  To do this,
       use the "DBI" parser and provide the necessary arguments to connect to
       the database and indicate the producer class, like so:

         $ sqlt -f DBI --dsn dbi:mysql:FOO --db-user guest \
           --db-password p4ssw0rd -t PostgreSQL > foo

       The "--list" option to "sqlt" will show the databases supported by DBI
       parsers.

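       The equivalent call from Perl passes the connection details as parser
       arguments (a sketch; the "parser_args" keys shown here mirror the
       command-line switches above, and the DSN and credentials are the same
       illustrative values):

         #!/usr/bin/perl
         use strict;
         use warnings;
         use SQL::Translator;

         my $translator = SQL::Translator->new(
             parser      => 'DBI',
             parser_args => {
                 dsn         => 'dbi:mysql:FOO',
                 db_user     => 'guest',
                 db_password => 'p4ssw0rd',
             },
             producer    => 'PostgreSQL',
         );

         # No input file is needed: the DBI parser reads the schema
         # straight from the live database connection.
         print $translator->translate or die $translator->error;
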

HANDLING NON-SQL DATA

       Certain structured document formats can be easily thought of as tables.
       SQLFairy can parse Microsoft Excel spreadsheets and arbitrarily
       delimited text files just as if they were schemas which contained only
       one table definition.  The column names are normalized to something
       sane for most databases (whitespace is converted to underscores and
       non-word characters are removed), and the data in each field is scanned
       to determine the appropriate data type (character, integer, or float)
       and size.  For instance, to convert a comma-separated file to an SQLite
       database, do the following:

         $ sqlt -f xSV --fs ',' -t SQLite foo.csv > foo-sqlite.sql

       Additionally, there is a non-SQL representation of relational schemas,
       namely XML.  Currently, the only XML supported is our own version;
       however, it would be fairly easy to add an XML parser for something
       like the TorqueDB (http://db.apache.org/torque/) project.  The actual
       parsing of XML should be trivial given the number of XML parsers
       available, so all that would be left would be to map the specific
       concepts in the source file to the Schema objects in SQLFairy.

       To convert a schema in SQLFairy's XML dialect to Oracle, do the
       following:

         $ sqlt -f XML-SQLFairy -t Oracle foo.xml > foo-oracle.sql

SERIALIZING SCHEMAS

       Parsing a schema is generally the most computationally expensive
       operation performed by SQLFairy, so it may behoove you to serialize a
       parsed schema if you need to perform repeated conversions.  For
       example, as part of a build process the author converts a MySQL schema
       first to YAML, then to PostgreSQL, Oracle, SQLite and Sybase.
       Additionally, a variety of documentation in HTML and images is
       produced.  This can be accomplished like so:

         $ sqlt -f MySQL -t YAML schema-mysql.sql > schema.yaml
         $ sqlt -f YAML -t Oracle schema.yaml > schema-oracle.sql
         $ sqlt -f YAML -t PostgreSQL schema.yaml > schema-postgresql.sql
         $ ...

       SQLFairy has three serialization producers, none of which is superior
       to the others in its description of a schema.

       ·   XML-SQLFairy

           This is the aforementioned XML format.  It is essentially a direct
           mapping of the Schema objects into XML.  This can also provide a
           very convenient bridge for describing a schema to a non-Perl
           application.  Providing a producer argument to "sqlt" of just "XML"
           will default to using "XML-SQLFairy."

       ·   Storable

           This producer stores the Schema object using Perl's Storable.pm
           module available on CPAN.

       ·   YAML

           This producer serializes the Schema object with the very readable
           structured data format of YAML (http://www.yaml.org/).  Earlier
           examples show serializing to YAML.


VISUALIZING SQL SCHEMAS

       The visualization tools in SQLFairy can graphically represent the
       tables, fields, datatypes and sizes, constraints, and foreign key
       relationships in a very compact and intuitive format.  This can be very
       beneficial in understanding and documenting large or small schemas.
       Two producers in SQLFairy will create pseudo-E/R (entity-relationship)
       diagrams:

       ·   Diagram

           The first visualization tool in SQLFairy, this producer uses libgd
           to draw a picture of the schema.  The tables are evenly distributed
           in definition order, running in columns (i.e., no graphing
           algorithms are used), so many of the lines showing the foreign key
           relationships may cross over each other and the table boxes.
           Please read the documentation of the "sqlt-diagram" script for all
           the options available to this producer.

       ·   GraphViz

           The layout of the GraphViz producer is far superior to the Diagram
           producer, as it uses the Graphviz binary from AT&T Research to
           create very professional-looking graphs.  There are several
           different layout algorithms and node shapes available.  Please see
           the documentation of the "sqlt-graph" script for more information.


AUTOMATED CODE-GENERATION

       Given that so many applications interact with SQL databases, it's no
       wonder that people have automated code to deal with this interaction.
       Class::DBI from CPAN is one such module that allows a developer to
       describe the relationships between tables and fields in class
       declarations and then generates all the SQL to interact (SELECT,
       UPDATE, DELETE, INSERT statements) at runtime.  Obviously, the schema
       already describes itself, so it only makes sense that you should be
       able to generate this kind of code directly from the schema.  The
       "ClassDBI" producer in SQLFairy does just this, creating a Perl module
       that inherits from Class::DBI and sets up most of the code needed to
       interact with the database.  Here is an example of how to do this:

         $ sqlt -f MySQL -t ClassDBI foo.sql > Foo.pm

       Then simply edit Foo.pm as needed and include it in your code.


CREATING A DATA DUMPER SCRIPT

       The Dumper producer creates a Perl script that can select the fields in
       each table and then create "INSERT" statements for each record in the
       database, similar to the output generated by MySQL's "mysqldump"
       program:

         $ sqlt -f YAML -t Dumper --dumper-db-user guest \
         > --dumper-db-pass p4ssw0rd --dumper-dsn dbi:mysql:FOO \
         > foo.yaml > foo-dumper.pl

       And then execute the resulting script to dump the data:

         $ chmod +x foo-dumper.pl
         $ ./foo-dumper.pl > foo-data.sql

       The dumper script also has a number of options available.  Execute the
       script with the "--help" flag to read about them.


DOCUMENTING WITH SQL::TRANSLATOR

       SQLFairy offers two producers to help document schemas:

       ·   HTML

           This producer creates a single HTML document which uses HTML
           formatting to describe the Schema objects and to create hyperlinks
           on foreign key relationships.  This can be a surprisingly useful
           documentation aid as it creates a very readable format that allows
           one to jump easily to specific tables and fields.  It's also
           possible to plug in your own CSS to further control the
           presentation of the HTML.

       ·   POD

           This is arguably not that useful a producer by itself, but the many
           POD-conversion tools available could be used to further transform
           the POD into something more interesting.  The schema is basically
           represented in POD sections where tables are broken down into
           fields, indices, constraints, foreign keys, etc.


TEMPLATE-BASED MANIPULATION OF SCHEMA OBJECTS

       All of the producers which create text output could have been coded
       using a templating system to mix the dynamic output in with static
       text.  CPAN offers several diverse templating systems, but few are as
       powerful as Template Toolkit (http://www.template-toolkit.org/).  You
       can easily create your own producer without writing any Perl code at
       all simply by writing a template using Template Toolkit's syntax.  The
       template will be passed a reference to the Schema object briefly
       described at the beginning of this document and mentioned many times
       throughout.  For example, a template that simply prints the name of
       each table and its fields might look like this:

         # file: schema.tt
         [% FOREACH table IN schema.get_tables %]
         Table: [% table.name %]
         Fields:
         [% FOREACH field IN table.get_fields -%]
           [% field.name %]
         [% END -%]
         [% END %]

       And then process it like so:

         $ sqlt -f YAML -t TTSchema --template schema.tt foo.yaml

       To create output like this:

         Table: foo
         Fields:
           foo_id
           foo_name

       For more information on Template Toolkit, please install the "Template"
       module and read the POD.


FINDING THE DIFFERENCES BETWEEN TWO SCHEMAS

       As mentioned above, the "sqlt-diff" script examines two schemas and
       creates SQL schema modification statements that can be used to
       transform the first schema into the second.  The flag syntax is
       somewhat quirky:

         $ sqlt-diff foo-v1.sql=MySQL foo-v2.sql=Oracle > diff.sql

       As demonstrated, the schemas need not even be from the same vendor,
       though this is likely to produce some spurious results, as datatypes
       are not currently viewed as equivalent unless they match exactly, even
       if they would be converted to the same type.  For example, MySQL's
       "integer" data type would be converted to Oracle's "number," but the
       differ isn't quite smart enough yet to figure this out.  Also, as the
       SQL to ALTER a field definition varies from database vendor to vendor,
       these statements are made using just the keyword "CHANGE" and will
       likely need to be corrected for the target database.


A UNIFIED GRAPHICAL INTERFACE

       Seeing all the above options and scripts, you may be pining for a
       single, graphical interface to handle all these transformations and
       choices.  This is exactly what the "sqlt.cgi" script provides.  Simply
       drop this script into your web server's CGI directory, enable the
       execute bit, and you can point your web browser to an HTML form which
       provides a simple interface to all the SQLFairy parsers and producers.


PLUGIN YOUR OWN PARSERS AND PRODUCERS

       Now that you have seen how the parsers and producers interact via the
       Schema objects, you may wish to create your own versions to plug in.

       Producers are probably the easier concept to grok, so let's cover that
       first.  By far the easiest way to create custom output is to use the
       TTSchema producer in conjunction with a Template Toolkit template as
       described earlier.  However, you can also easily pass a reference to a
       subroutine that SQL::Translator can call for the production of the
       output.  This subroutine will be passed a single argument of the
       SQL::Translator object, which you can use to access the Schema objects.
       Please read the POD for SQL::Translator and SQL::Translator::Schema to
       learn the methods you can call.  Here is a very simple example:

         #!/usr/bin/perl

         use strict;
         use SQL::Translator;

         my $input = q[
             create table foo (
                 foo_id int not null default '0' primary key,
                 foo_name varchar(30) not null default ''
             );

             create table bar (
                 bar_id int not null default '0' primary key,
                 bar_value varchar(100) not null default ''
             );
         ];

         my $t = SQL::Translator->new;
         $t->parser('MySQL') or die $t->error;
         $t->producer( \&produce ) or die $t->error;
         my $output = $t->translate( \$input ) or die $t->error;
         print $output;

         sub produce {
             my $tr     = shift;
             my $schema = $tr->schema;
             my $output = '';
             for my $t ( $schema->get_tables ) {
                 $output .= join('', "Table = ", $t->name, "\n");
             }
             return $output;
         }

       Executing this script produces the following:

         $ ./my-producer.pl
         Table = foo
         Table = bar

       A custom parser will be passed two arguments:  the SQL::Translator
       object and the data to be parsed.  In this example, the schema will be
       represented in a simple text format.  Each line is a table definition
       where the fields are separated by colons.  The first field is the table
       name and the following fields are column definitions where the column
       name, data type and size are separated by spaces.  The specifics of the
       example are unimportant -- what is being demonstrated is that you have
       to decide how to parse the incoming data and then map the concepts in
       the data to the Schema object.

         #!/usr/bin/perl

         use strict;
         use SQL::Translator;

         my $input =
             "foo:foo_id int 11:foo_name varchar 30\n" .
             "bar:bar_id int 11:bar_value varchar 30"
         ;

         my $t = SQL::Translator->new;
         $t->parser( \&parser ) or die $t->error;
         $t->producer('Oracle') or die $t->error;
         my $output = $t->translate( \$input ) or die $t->error;
         print $output;

         sub parser {
             my ( $tr, $data ) = @_;
             my $schema = $tr->schema;

             for my $line ( split( /\n/, $data ) ) {
                 my ( $table_name, @fields ) = split( /:/, $line );
                 my $table = $schema->add_table( name => $table_name )
                     or die $schema->error;
                 for ( @fields ) {
                     my ( $f_name, $type, $size ) = split;
                     $table->add_field(
                         name      => $f_name,
                         data_type => $type,
                         size      => $size,
                     ) or die $table->error;
                 }
             }

             return 1;
         }

       And here is the output produced by this script:

         --
         -- Created by SQL::Translator::Producer::Oracle
         -- Created on Wed Mar 31 15:43:30 2004
         --
         --
         -- Table: foo
         --

         CREATE TABLE foo (
           foo_id number(11),
           foo_name varchar2(30)
         );

         --
         -- Table: bar
         --

         CREATE TABLE bar (
           bar_id number(11),
           bar_value varchar2(30)
         );

       If you create a useful parser or producer, you are encouraged to submit
       your work to the SQLFairy project!


PLUGIN TEMPLATE TOOLKIT PRODUCERS

       You may find that the TTSchema producer doesn't give you enough control
       over templating and you want to play with the Template config or add
       your own variables.  Or maybe you just have a really good template you
       want to submit to SQLFairy :) If so, the
       SQL::Translator::Producer::TT::Base producer may be just for you!
       Instead of working like a normal producer, it provides a base class so
       you can cheaply build new producer modules based on templates.

       Its simplest use is when we just want to put a single template in its
       own module.  So to create a Foo producer we create a Custom/Foo.pm file
       as follows, putting our template in the __DATA__ section.

        package Custom::Foo;
        use base qw/SQL::Translator::Producer::TT::Base/;
        # Use our new class as the producer
        sub produce { return __PACKAGE__->new( translator => shift )->run; };

        __DATA__
        [% FOREACH table IN schema.get_tables %]
        Table: [% table.name %]
        Fields:
        [% FOREACH field IN table.get_fields -%]
          [% field.name %]
        [% END -%]
        [% END %]

       For that we get a producer called Custom::Foo that we can now call like
       a normal producer (as long as the directory with Custom/Foo.pm is in
       our @INC path):

        $ sqlt -f YAML -t Custom-Foo foo.yaml

       The template gets variables of "schema" and "translator" to use in
       building its output.  You also get a number of methods you can override
       to hook into the template generation.

       tt_config Allows you to set the config options used by the Template
       object.  The Template Toolkit provides a huge number of options which
       allow you to do all sorts of magic (see Template::Manual::Config for
       details).  This method provides a hook into them by returning a hash of
       options for the Template.  e.g., say you want to use the INTERPOLATE
       option to save some typing in your template:

        sub tt_config { ( INTERPOLATE => 1 ); }

       Another common use for this is to add your own filters to the template:

        sub tt_config {(
           INTERPOLATE => 1,
           FILTERS => { foo_filter => \&foo_filter, }
        );}

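       The filter itself is just a subroutine that receives the text to
       transform and returns the result.  A hypothetical foo_filter that
       upper-cases its input might look like this (the filter name is
       illustrative, not part of SQLFairy):

        # In your producer subclass; usable in the template as
        # [% table.name | foo_filter %]
        sub foo_filter {
            my $text = shift;
            return uc $text;
        }
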
       Another common extension is adding your own template variables.  This
       is done with tt_vars:

        sub tt_vars { ( foo => "bar" ); }

       What about using template files instead of DATA sections?  You can
       already -- if you give a template on the command line, your new
       producer will use that instead of reading the DATA section:

        $ sqlt -f YAML -t Custom-Foo --template foo.tt foo.yaml

       This is useful as you can set up a producer that adds a set of filters
       and variables that you can then use in templates given on the command
       line.  (There is also a tt_schema method to override if you need even
       finer control over the source of your template.)  Note that if you
       leave out the DATA section altogether, then your producer will require
       a template file name to be given.

       See SQL::Translator::Producer::TT::Base for more details.
514

AUTHOR

       Ken Y. Clark <kclark@cpan.org>.

perl v5.30.1                      2020-01-30        SQL::Translator::Manual(3)