CSV_XS(3) User Contributed Perl Documentation CSV_XS(3)

NAME
Text::CSV_XS - comma-separated values manipulation routines

SYNOPSIS
    # Functional interface
10 use Text::CSV_XS qw( csv );
11
12 # Read whole file in memory
13 my $aoa = csv (in => "data.csv"); # as array of array
14 my $aoh = csv (in => "data.csv",
15 headers => "auto"); # as array of hash
16
17 # Write array of arrays as csv file
18 csv (in => $aoa, out => "file.csv", sep_char=> ";");
19
20 # Only show lines where "code" is odd
21 csv (in => "data.csv", filter => { code => sub { $_ % 2 }});
22
23
24 # Object interface
25 use Text::CSV_XS;
26
27 my @rows;
28 # Read/parse CSV
29 my $csv = Text::CSV_XS->new ({ binary => 1, auto_diag => 1 });
30 open my $fh, "<:encoding(utf8)", "test.csv" or die "test.csv: $!";
31 while (my $row = $csv->getline ($fh)) {
32 $row->[2] =~ m/pattern/ or next; # 3rd field should match
33 push @rows, $row;
34 }
35 close $fh;
36
37 # and write as CSV
38 open $fh, ">:encoding(utf8)", "new.csv" or die "new.csv: $!";
39 $csv->say ($fh, $_) for @rows;
40 close $fh or die "new.csv: $!";

DESCRIPTION
Text::CSV_XS provides facilities for the composition and decomposition
of comma-separated values. An instance of the Text::CSV_XS class will
combine fields into a "CSV" string and parse a "CSV" string into
fields.
47
The module accepts either strings or files as input and supports the
use of user-specified characters for delimiters, separators, and
escapes.
51
52 Embedded newlines
53 Important Note: The default behavior is to accept only ASCII
54 characters in the range from 0x20 (space) to 0x7E (tilde). This means
55 that the fields can not contain newlines. If your data contains
56 newlines embedded in fields, or characters above 0x7E (tilde), or
57 binary data, you must set "binary => 1" in the call to "new". To cover
58 the widest range of parsing options, you will always want to set
59 binary.
60
But you still have the problem that you have to pass a correct line to
the "parse" method, which is more complicated than it looks in typical
usage:
64
65 my $csv = Text::CSV_XS->new ({ binary => 1, eol => $/ });
66 while (<>) { # WRONG!
67 $csv->parse ($_);
68 my @fields = $csv->fields ();
69 }
70
This will break, as the "while" might read broken lines: it does not
72 care about the quoting. If you need to support embedded newlines, the
73 way to go is to not pass "eol" in the parser (it accepts "\n", "\r",
74 and "\r\n" by default) and then
75
76 my $csv = Text::CSV_XS->new ({ binary => 1 });
77 open my $fh, "<", $file or die "$file: $!";
78 while (my $row = $csv->getline ($fh)) {
79 my @fields = @$row;
80 }
81
82 The old(er) way of using global file handles is still supported
83
84 while (my $row = $csv->getline (*ARGV)) { ... }
85
86 Unicode
87 Unicode is only tested to work with perl-5.8.2 and up.
88
89 See also "BOM".
90
91 The simplest way to ensure the correct encoding is used for in- and
92 output is by either setting layers on the filehandles, or setting the
93 "encoding" argument for "csv".
94
95 open my $fh, "<:encoding(UTF-8)", "in.csv" or die "in.csv: $!";
96 or
97 my $aoa = csv (in => "in.csv", encoding => "UTF-8");
98
99 open my $fh, ">:encoding(UTF-8)", "out.csv" or die "out.csv: $!";
100 or
101 csv (in => $aoa, out => "out.csv", encoding => "UTF-8");
102
On parsing (both for "getline" and "parse"), if the source is marked
as being UTF8, then all fields that are marked binary will also be
marked UTF8.
106
On combining ("print" and "combine"): if any of the combining fields
was marked UTF8, the resulting string will be marked as UTF8. Note
however that fields appearing before the first UTF8-marked field that
contain 8-bit characters not upgraded to UTF8 will remain "bytes" in
the resulting string, possibly causing unexpected errors. If you pass
data of different encodings, or you don't know whether the encodings
differ, force everything to be upgraded before you pass it on:
115
116 $csv->print ($fh, [ map { utf8::upgrade (my $x = $_); $x } @data ]);
117
118 For complete control over encoding, please use Text::CSV::Encoded:
119
120 use Text::CSV::Encoded;
121 my $csv = Text::CSV::Encoded->new ({
122 encoding_in => "iso-8859-1", # the encoding comes into Perl
123 encoding_out => "cp1252", # the encoding comes out of Perl
124 });
125
126 $csv = Text::CSV::Encoded->new ({ encoding => "utf8" });
127 # combine () and print () accept *literally* utf8 encoded data
128 # parse () and getline () return *literally* utf8 encoded data
129
130 $csv = Text::CSV::Encoded->new ({ encoding => undef }); # default
131 # combine () and print () accept UTF8 marked data
132 # parse () and getline () return UTF8 marked data
133
134 BOM
135 BOM (or Byte Order Mark) handling is available only inside the
136 "header" method. This method supports the following encodings:
137 "utf-8", "utf-1", "utf-32be", "utf-32le", "utf-16be", "utf-16le",
138 "utf-ebcdic", "scsu", "bocu-1", and "gb-18030". See Wikipedia
139 <https://en.wikipedia.org/wiki/Byte_order_mark>.
140
141 If a file has a BOM, the easiest way to deal with that is
142
143 my $aoh = csv (in => $file, detect_bom => 1);
144
145 All records will be encoded based on the detected BOM.
146
This implies a call to the "header" method, which by default also
sets the "column_names". So this is not the same as
149
150 my $aoh = csv (in => $file, headers => "auto");
151
which only reads the first record to set "column_names", but ignores
the meaning of a possibly present BOM.

SPECIFICATION
156 While no formal specification for CSV exists, RFC 4180
157 <http://tools.ietf.org/html/rfc4180> (1) describes the common format
158 and establishes "text/csv" as the MIME type registered with the IANA.
159 RFC 7111 <http://tools.ietf.org/html/rfc7111> (2) adds fragments to
160 CSV.
161
162 Many informal documents exist that describe the "CSV" format. "How
163 To: The Comma Separated Value (CSV) File Format"
164 <http://www.creativyst.com/Doc/Articles/CSV/CSV01.htm> (3) provides an
165 overview of the "CSV" format in the most widely used applications and
166 explains how it can best be used and supported.
167
168 1) http://tools.ietf.org/html/rfc4180
169 2) http://tools.ietf.org/html/rfc7111
170 3) http://www.creativyst.com/Doc/Articles/CSV/CSV01.htm
171
172 The basic rules are as follows:
173
174 CSV is a delimited data format that has fields/columns separated by
175 the comma character and records/rows separated by newlines. Fields that
contain a special character (comma, newline, or double quote) must be
177 enclosed in double quotes. However, if a line contains a single entry
178 that is the empty string, it may be enclosed in double quotes. If a
179 field's value contains a double quote character it is escaped by
180 placing another double quote character next to it. The "CSV" file
181 format does not require a specific character encoding, byte order, or
182 line terminator format.
183
184 · Each record is a single line ended by a line feed (ASCII/"LF"=0x0A)
185 or a carriage return and line feed pair (ASCII/"CRLF"="0x0D 0x0A"),
186 however, line-breaks may be embedded.
187
188 · Fields are separated by commas.
189
190 · Allowable characters within a "CSV" field include 0x09 ("TAB") and
191 the inclusive range of 0x20 (space) through 0x7E (tilde). In binary
192 mode all characters are accepted, at least in quoted fields.
193
194 · A field within "CSV" must be surrounded by double-quotes to
195 contain a separator character (comma).
196
Though this is the most clear and restrictive definition, Text::CSV_XS
is way more liberal than this, and allows extensions:
199
200 · Line termination by a single carriage return is accepted by default
201
· The separation-, quotation-, and escape-characters can be any ASCII
  character in the range from 0x20 (space) to 0x7E (tilde).
204 Characters outside this range may or may not work as expected.
205 Multibyte characters, like UTF "U+060C" (ARABIC COMMA), "U+FF0C"
206 (FULLWIDTH COMMA), "U+241B" (SYMBOL FOR ESCAPE), "U+2424" (SYMBOL
207 FOR NEWLINE), "U+FF02" (FULLWIDTH QUOTATION MARK), and "U+201C" (LEFT
208 DOUBLE QUOTATION MARK) (to give some examples of what might look
209 promising) work for newer versions of perl for "sep_char", and
210 "quote_char" but not for "escape_char".
211
212 If you use perl-5.8.2 or higher these three attributes are
213 utf8-decoded, to increase the likelihood of success. This way
214 "U+00FE" will be allowed as a quote character.
215
216 · A field in "CSV" must be surrounded by double-quotes to make an
217 embedded double-quote, represented by a pair of consecutive double-
218 quotes, valid. In binary mode you may additionally use the sequence
219 ""0" for representation of a NULL byte. Using 0x00 in binary mode is
220 just as valid.
221
222 · Several violations of the above specification may be lifted by
223 passing some options as attributes to the object constructor.

METHODS
226 version
227 (Class method) Returns the current module version.
228
229 new
230 (Class method) Returns a new instance of class Text::CSV_XS. The
231 attributes are described by the (optional) hash ref "\%attr".
232
233 my $csv = Text::CSV_XS->new ({ attributes ... });
234
235 The following attributes are available:
236
237 eol
238
239 my $csv = Text::CSV_XS->new ({ eol => $/ });
240 $csv->eol (undef);
241 my $eol = $csv->eol;
242
243 The end-of-line string to add to rows for "print" or the record
244 separator for "getline".
245
246 When not passed in a parser instance, the default behavior is to
247 accept "\n", "\r", and "\r\n", so it is probably safer to not specify
248 "eol" at all. Passing "undef" or the empty string behave the same.
249
250 When not passed in a generating instance, records are not terminated
251 at all, so it is probably wise to pass something you expect. A safe
252 choice for "eol" on output is either $/ or "\r\n".
253
254 Common values for "eol" are "\012" ("\n" or Line Feed), "\015\012"
255 ("\r\n" or Carriage Return, Line Feed), and "\015" ("\r" or Carriage
256 Return). The "eol" attribute cannot exceed 7 (ASCII) characters.
257
If both $/ and "eol" equal "\015", parsing lines that end on only a
Carriage Return without Line Feed will be "parse"d correctly.
260
261 sep_char
262
263 my $csv = Text::CSV_XS->new ({ sep_char => ";" });
264 $csv->sep_char (";");
265 my $c = $csv->sep_char;
266
The char used to separate fields, by default a comma (","). Limited
268 to a single-byte character, usually in the range from 0x20 (space) to
269 0x7E (tilde). When longer sequences are required, use "sep".
270
271 The separation character can not be equal to the quote character or to
272 the escape character.
273
274 See also "CAVEATS"
275
276 sep
277
278 my $csv = Text::CSV_XS->new ({ sep => "\N{FULLWIDTH COMMA}" });
279 $csv->sep (";");
280 my $sep = $csv->sep;
281
282 The chars used to separate fields, by default undefined. Limited to 8
283 bytes.
284
285 When set, overrules "sep_char". If its length is one byte it acts as
286 an alias to "sep_char".
287
288 See also "CAVEATS"
289
290 quote_char
291
292 my $csv = Text::CSV_XS->new ({ quote_char => "'" });
293 $csv->quote_char (undef);
294 my $c = $csv->quote_char;
295
296 The character to quote fields containing blanks or binary data, by
297 default the double quote character ("""). A value of undef suppresses
298 quote chars (for simple cases only). Limited to a single-byte
299 character, usually in the range from 0x20 (space) to 0x7E (tilde).
300 When longer sequences are required, use "quote".
301
302 "quote_char" can not be equal to "sep_char".
303
304 quote
305
306 my $csv = Text::CSV_XS->new ({ quote => "\N{FULLWIDTH QUOTATION MARK}" });
307 $csv->quote ("'");
308 my $quote = $csv->quote;
309
310 The chars used to quote fields, by default undefined. Limited to 8
311 bytes.
312
313 When set, overrules "quote_char". If its length is one byte it acts as
314 an alias to "quote_char".
315
316 See also "CAVEATS"
317
318 escape_char
319
320 my $csv = Text::CSV_XS->new ({ escape_char => "\\" });
321 $csv->escape_char (":");
322 my $c = $csv->escape_char;
323
324 The character to escape certain characters inside quoted fields.
325 This is limited to a single-byte character, usually in the range
326 from 0x20 (space) to 0x7E (tilde).
327
328 The "escape_char" defaults to being the double-quote mark ("""). In
329 other words the same as the default "quote_char". This means that
330 doubling the quote mark in a field escapes it:
331
332 "foo","bar","Escape ""quote mark"" with two ""quote marks""","baz"
333
334 If you change the "quote_char" without changing the
335 "escape_char", the "escape_char" will still be the double-quote
336 ("""). If instead you want to escape the "quote_char" by doubling it
337 you will need to also change the "escape_char" to be the same as what
338 you have changed the "quote_char" to.
339
340 Setting "escape_char" to <undef> or "" will disable escaping completely
341 and is greatly discouraged. This will also disable "escape_null".
342
343 The escape character can not be equal to the separation character.
344
345 binary
346
347 my $csv = Text::CSV_XS->new ({ binary => 1 });
348 $csv->binary (0);
349 my $f = $csv->binary;
350
351 If this attribute is 1, you may use binary characters in quoted
352 fields, including line feeds, carriage returns and "NULL" bytes. (The
353 latter could be escaped as ""0".) By default this feature is off.
354
355 If a string is marked UTF8, "binary" will be turned on automatically
356 when binary characters other than "CR" and "NL" are encountered. Note
357 that a simple string like "\x{00a0}" might still be binary, but not
358 marked UTF8, so setting "{ binary => 1 }" is still a wise option.
359
360 strict
361
362 my $csv = Text::CSV_XS->new ({ strict => 1 });
363 $csv->strict (0);
364 my $f = $csv->strict;
365
366 If this attribute is set to 1, any row that parses to a different
367 number of fields than the previous row will cause the parser to throw
368 error 2014.
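
For example, a minimal sketch (the file name "ragged.csv" is made up):
with "strict" and a fatal "auto_diag" level, a record whose field count
differs from the previous record aborts the run.

    my $csv = Text::CSV_XS->new ({ strict => 1, auto_diag => 2 });
    open my $fh, "<", "ragged.csv" or die "ragged.csv: $!";
    while (my $row = $csv->getline ($fh)) { # dies on error 2014
        printf "%d fields\n", scalar @$row;
        }
    close $fh;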
369
370 formula_handling
371
372 formula
373
374 my $csv = Text::CSV_XS->new ({ formula => "none" });
375 $csv->formula ("none");
376 my $f = $csv->formula;
377
378 This defines the behavior of fields containing formulas. As formulas
379 are considered dangerous in spreadsheets, this attribute can define an
380 optional action to be taken if a field starts with an equal sign ("=").
381
For the purpose of code readability, this can also be written as
383
384 my $csv = Text::CSV_XS->new ({ formula_handling => "none" });
385 $csv->formula_handling ("none");
386 my $f = $csv->formula_handling;
387
388 Possible values for this attribute are
389
390 none
391 Take no specific action. This is the default.
392
393 $csv->formula ("none");
394
395 die
396 Cause the process to "die" whenever a leading "=" is encountered.
397
398 $csv->formula ("die");
399
400 croak
401 Cause the process to "croak" whenever a leading "=" is encountered.
402 (See Carp)
403
404 $csv->formula ("croak");
405
406 diag
407 Report position and content of the field whenever a leading "=" is
408 found. The value of the field is unchanged.
409
410 $csv->formula ("diag");
411
412 empty
413 Replace the content of fields that start with a "=" with the empty
414 string.
415
416 $csv->formula ("empty");
417 $csv->formula ("");
418
419 undef
420 Replace the content of fields that start with a "=" with "undef".
421
422 $csv->formula ("undef");
423 $csv->formula (undef);
424
All other values will give a warning and then fall back to "diag".
426
427 decode_utf8
428
429 my $csv = Text::CSV_XS->new ({ decode_utf8 => 1 });
430 $csv->decode_utf8 (0);
431 my $f = $csv->decode_utf8;
432
This attribute defaults to TRUE.
434
While parsing, fields that are valid UTF-8 are automatically set to
be UTF-8, so that
437
438 $csv->parse ("\xC4\xA8\n");
439
440 results in
441
442 PV("\304\250"\0) [UTF8 "\x{128}"]
443
Sometimes this is not the desired behavior. To prevent these upgrades,
set this attribute to false, and the result will be
446
447 PV("\304\250"\0)
448
449 auto_diag
450
451 my $csv = Text::CSV_XS->new ({ auto_diag => 1 });
452 $csv->auto_diag (2);
453 my $l = $csv->auto_diag;
454
Setting this attribute to a number between 1 and 9 causes "error_diag"
to be automatically called in void context upon errors.
457
458 In case of error "2012 - EOF", this call will be void.
459
460 If "auto_diag" is set to a numeric value greater than 1, it will "die"
461 on errors instead of "warn". If set to anything unrecognized, it will
462 be silently ignored.
463
Future extensions to this feature will include more reliable auto-
detection of "autodie" being active in the scope in which the error
occurred, which will increment the value of "auto_diag" by 1 the
moment the error is detected.
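
As an illustration, a small hedged sketch: with "auto_diag => 1" a
parse error is reported on STDERR, while a value of 2 or higher turns
the same error into a "die".

    my $csv = Text::CSV_XS->new ({ auto_diag => 1 });
    $csv->parse (q{1,"not terminated}); # reported, returns false

    $csv->auto_diag (2);
    $csv->parse (q{1,"not terminated}); # now fatal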
468
469 diag_verbose
470
471 my $csv = Text::CSV_XS->new ({ diag_verbose => 1 });
472 $csv->diag_verbose (2);
473 my $l = $csv->diag_verbose;
474
Set the verbosity of the output triggered by "auto_diag". Currently
it only adds the current input-record-number (if known) to the
diagnostic output with an indication of the position of the error.
478
479 blank_is_undef
480
481 my $csv = Text::CSV_XS->new ({ blank_is_undef => 1 });
482 $csv->blank_is_undef (0);
483 my $f = $csv->blank_is_undef;
484
485 Under normal circumstances, "CSV" data makes no distinction between
486 quoted- and unquoted empty fields. These both end up in an empty
487 string field once read, thus
488
489 1,"",," ",2
490
491 is read as
492
493 ("1", "", "", " ", "2")
494
495 When writing "CSV" files with either "always_quote" or "quote_empty"
496 set, the unquoted empty field is the result of an undefined value.
497 To enable this distinction when reading "CSV" data, the
498 "blank_is_undef" attribute will cause unquoted empty fields to be set
499 to "undef", causing the above to be parsed as
500
501 ("1", "", undef, " ", "2")
502
Note that this is specifically important when loading "CSV" fields
into a database that allows "NULL" values, as the perl equivalent for
"NULL" is "undef" in DBI land.
506
507 empty_is_undef
508
509 my $csv = Text::CSV_XS->new ({ empty_is_undef => 1 });
510 $csv->empty_is_undef (0);
511 my $f = $csv->empty_is_undef;
512
513 Going one step further than "blank_is_undef", this attribute
514 converts all empty fields to "undef", so
515
516 1,"",," ",2
517
518 is read as
519
520 (1, undef, undef, " ", 2)
521
Note that this affects only fields that are originally empty, not
fields that are empty after stripping allowed whitespace. YMMV.
524
525 allow_whitespace
526
527 my $csv = Text::CSV_XS->new ({ allow_whitespace => 1 });
528 $csv->allow_whitespace (0);
529 my $f = $csv->allow_whitespace;
530
531 When this option is set to true, the whitespace ("TAB"'s and
532 "SPACE"'s) surrounding the separation character is removed when
533 parsing. If either "TAB" or "SPACE" is one of the three characters
534 "sep_char", "quote_char", or "escape_char" it will not be considered
535 whitespace.
536
537 Now lines like:
538
539 1 , "foo" , bar , 3 , zapp
540
541 are parsed as valid "CSV", even though it violates the "CSV" specs.
542
Note that all whitespace is stripped from both the start and the end
of each field. That makes this option more than just a feature for
parsing bad "CSV" lines, as
546
547 1, 2.0, 3, ape , monkey
548
549 will now be parsed as
550
551 ("1", "2.0", "3", "ape", "monkey")
552
553 even if the original line was perfectly acceptable "CSV".
554
555 allow_loose_quotes
556
557 my $csv = Text::CSV_XS->new ({ allow_loose_quotes => 1 });
558 $csv->allow_loose_quotes (0);
559 my $f = $csv->allow_loose_quotes;
560
561 By default, parsing unquoted fields containing "quote_char" characters
562 like
563
564 1,foo "bar" baz,42
565
566 would result in parse error 2034. Though it is still bad practice to
567 allow this format, we cannot help the fact that some vendors
568 make their applications spit out lines styled this way.
569
570 If there is really bad "CSV" data, like
571
572 1,"foo "bar" baz",42
573
574 or
575
576 1,""foo bar baz"",42
577
578 there is a way to get this data-line parsed and leave the quotes inside
579 the quoted field as-is. This can be achieved by setting
580 "allow_loose_quotes" AND making sure that the "escape_char" is not
581 equal to "quote_char".
582
583 allow_loose_escapes
584
585 my $csv = Text::CSV_XS->new ({ allow_loose_escapes => 1 });
586 $csv->allow_loose_escapes (0);
587 my $f = $csv->allow_loose_escapes;
588
589 Parsing fields that have "escape_char" characters that escape
590 characters that do not need to be escaped, like:
591
592 my $csv = Text::CSV_XS->new ({ escape_char => "\\" });
593 $csv->parse (qq{1,"my bar\'s",baz,42});
594
595 would result in parse error 2025. Though it is bad practice to allow
596 this format, this attribute enables you to treat all escape character
sequences equally.
598
599 allow_unquoted_escape
600
601 my $csv = Text::CSV_XS->new ({ allow_unquoted_escape => 1 });
602 $csv->allow_unquoted_escape (0);
603 my $f = $csv->allow_unquoted_escape;
604
A backward compatibility issue where "escape_char" differs from
"quote_char" prevents "escape_char" from being in the first position
of a field. If "quote_char" is equal to the default """ and
"escape_char" is set to "\", this would be illegal:
609
610 1,\0,2
611
612 Setting this attribute to 1 might help to overcome issues with
613 backward compatibility and allow this style.
614
615 always_quote
616
617 my $csv = Text::CSV_XS->new ({ always_quote => 1 });
618 $csv->always_quote (0);
619 my $f = $csv->always_quote;
620
621 By default the generated fields are quoted only if they need to be.
622 For example, if they contain the separator character. If you set this
623 attribute to 1 then all defined fields will be quoted. ("undef" fields
are not quoted, see "blank_is_undef"). This quite often makes it
easier to handle exported data in external applications. (Poor
creatures who would be better off using Text::CSV_XS. :)
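
A short sketch of the difference in output (the expected output is
shown in the comments):

    my $csv = Text::CSV_XS->new ({ eol => "\n" });
    $csv->print (*STDOUT, [ 1, "foo", undef, "" ]);
    # 1,foo,,

    $csv->always_quote (1);
    $csv->print (*STDOUT, [ 1, "foo", undef, "" ]);
    # "1","foo",,""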
627
628 quote_space
629
630 my $csv = Text::CSV_XS->new ({ quote_space => 1 });
631 $csv->quote_space (0);
632 my $f = $csv->quote_space;
633
By default, a space in a field would trigger quotation. As no rule
exists that requires this in "CSV", nor any that forbids it, the
default is true for safety. You can exclude the space from this
trigger by setting this attribute to 0.
638
639 quote_empty
640
641 my $csv = Text::CSV_XS->new ({ quote_empty => 1 });
642 $csv->quote_empty (0);
643 my $f = $csv->quote_empty;
644
645 By default the generated fields are quoted only if they need to be.
646 An empty (defined) field does not need quotation. If you set this
647 attribute to 1 then empty defined fields will be quoted. ("undef"
648 fields are not quoted, see "blank_is_undef"). See also "always_quote".
649
650 quote_binary
651
652 my $csv = Text::CSV_XS->new ({ quote_binary => 1 });
653 $csv->quote_binary (0);
654 my $f = $csv->quote_binary;
655
656 By default, all "unsafe" bytes inside a string cause the combined
657 field to be quoted. By setting this attribute to 0, you can disable
658 that trigger for bytes >= 0x7F.
659
660 escape_null
661
662 my $csv = Text::CSV_XS->new ({ escape_null => 1 });
663 $csv->escape_null (0);
664 my $f = $csv->escape_null;
665
666 By default, a "NULL" byte in a field would be escaped. This option
667 enables you to treat the "NULL" byte as a simple binary character in
binary mode (when "{ binary => 1 }" is set). The default is true. You
669 can prevent "NULL" escapes by setting this attribute to 0.
670
671 When the "escape_char" attribute is set to undefined, this attribute
672 will be set to false.
673
674 The default setting will encode "=\x00=" as
675
676 "="0="
677
678 With "escape_null" set, this will result in
679
680 "=\x00="
681
682 The default when using the "csv" function is "false".
683
684 For backward compatibility reasons, the deprecated old name
685 "quote_null" is still recognized.
686
687 keep_meta_info
688
689 my $csv = Text::CSV_XS->new ({ keep_meta_info => 1 });
690 $csv->keep_meta_info (0);
691 my $f = $csv->keep_meta_info;
692
693 By default, the parsing of input records is as simple and fast as
694 possible. However, some parsing information - like quotation of the
695 original field - is lost in that process. Setting this flag to true
696 enables retrieving that information after parsing with the methods
697 "meta_info", "is_quoted", and "is_binary" described below. Default is
698 false for performance.
699
If you set this attribute to a value greater than 9, then you can
control the output quotation style to mirror the one used in the input
of the last parsed record (unless quotation was added because of other
reasons).
704
705 my $csv = Text::CSV_XS->new ({
706 binary => 1,
707 keep_meta_info => 1,
708 quote_space => 0,
709 });
710
    $csv->parse (q{1,,"", ," ",f,"g","h""h",help,"help"});
    my @row = $csv->fields;

    $csv->print (*STDOUT, \@row);
714 # 1,,, , ,f,g,"h""h",help,help
715 $csv->keep_meta_info (11);
716 $csv->print (*STDOUT, \@row);
717 # 1,,"", ," ",f,"g","h""h",help,"help"
718
719 undef_str
720
721 my $csv = Text::CSV_XS->new ({ undef_str => "\\N" });
722 $csv->undef_str (undef);
723 my $s = $csv->undef_str;
724
725 This attribute optionally defines the output of undefined fields. The
726 value passed is not changed at all, so if it needs quotation, the
727 quotation needs to be included in the value of the attribute. Use with
728 caution, as passing a value like ",",,,,""" will for sure mess up
729 your output. The default for this attribute is "undef", meaning no
730 special treatment.
731
732 This attribute is useful when exporting CSV data to be imported in
733 custom loaders, like for MySQL, that recognize special sequences for
734 "NULL" data.
735
736 verbatim
737
738 my $csv = Text::CSV_XS->new ({ verbatim => 1 });
739 $csv->verbatim (0);
740 my $f = $csv->verbatim;
741
742 This is a quite controversial attribute to set, but makes some hard
743 things possible.
744
745 The rationale behind this attribute is to tell the parser that the
746 normally special characters newline ("NL") and Carriage Return ("CR")
747 will not be special when this flag is set, and be dealt with as being
748 ordinary binary characters. This will ease working with data with
749 embedded newlines.
750
751 When "verbatim" is used with "getline", "getline" auto-"chomp"'s
752 every line.
753
754 Imagine a file format like
755
756 M^^Hans^Janssen^Klas 2\n2A^Ja^11-06-2007#\r\n
757
where the line ending is a very specific "#\r\n", and the sep_char is
759 a "^" (caret). None of the fields is quoted, but embedded binary
760 data is likely to be present. With the specific line ending, this
761 should not be too hard to detect.
762
763 By default, Text::CSV_XS' parse function is instructed to only know
764 about "\n" and "\r" to be legal line endings, and so has to deal with
765 the embedded newline as a real "end-of-line", so it can scan the next
766 line if binary is true, and the newline is inside a quoted field. With
this option, we tell "parse" to treat "\n" as nothing more than a
binary character.
769
770 For "parse" this means that the parser has no more idea about line
771 ending and "getline" "chomp"s line endings on reading.
772
773 types
774
775 A set of column types; the attribute is immediately passed to the
776 "types" method.
777
778 callbacks
779
780 See the "Callbacks" section below.
781
782 accessors
783
784 To sum it up,
785
786 $csv = Text::CSV_XS->new ();
787
788 is equivalent to
789
790 $csv = Text::CSV_XS->new ({
791 eol => undef, # \r, \n, or \r\n
792 sep_char => ',',
793 sep => undef,
794 quote_char => '"',
795 quote => undef,
796 escape_char => '"',
797 binary => 0,
798 decode_utf8 => 1,
799 auto_diag => 0,
800 diag_verbose => 0,
801 blank_is_undef => 0,
802 empty_is_undef => 0,
803 allow_whitespace => 0,
804 allow_loose_quotes => 0,
805 allow_loose_escapes => 0,
806 allow_unquoted_escape => 0,
807 always_quote => 0,
808 quote_empty => 0,
809 quote_space => 1,
810 escape_null => 1,
811 quote_binary => 1,
812 keep_meta_info => 0,
813 verbatim => 0,
814 undef_str => undef,
815 types => undef,
816 callbacks => undef,
817 });
818
819 For all of the above mentioned flags, an accessor method is available
820 where you can inquire the current value, or change the value
821
822 my $quote = $csv->quote_char;
823 $csv->binary (1);
824
825 It is not wise to change these settings halfway through writing "CSV"
826 data to a stream. If however you want to create a new stream using the
827 available "CSV" object, there is no harm in changing them.
828
829 If the "new" constructor call fails, it returns "undef", and makes
830 the fail reason available through the "error_diag" method.
831
832 $csv = Text::CSV_XS->new ({ ecs_char => 1 }) or
833 die "".Text::CSV_XS->error_diag ();
834
835 "error_diag" will return a string like
836
837 "INI - Unknown attribute 'ecs_char'"
838
839 known_attributes
840 @attr = Text::CSV_XS->known_attributes;
841 @attr = Text::CSV_XS::known_attributes;
842 @attr = $csv->known_attributes;
843
844 This method will return an ordered list of all the supported
845 attributes as described above. This can be useful for knowing what
846 attributes are valid in classes that use or extend Text::CSV_XS.
847
848 print
849 $status = $csv->print ($fh, $colref);
850
851 Similar to "combine" + "string" + "print", but much more efficient.
852 It expects an array ref as input (not an array!) and the resulting
853 string is not really created, but immediately written to the $fh
854 object, typically an IO handle or any other object that offers a
855 "print" method.
856
857 For performance reasons "print" does not create a result string, so
858 all "string", "status", "fields", and "error_input" methods will return
859 undefined information after executing this method.
860
861 If $colref is "undef" (explicit, not through a variable argument) and
862 "bind_columns" was used to specify fields to be printed, it is
863 possible to make performance improvements, as otherwise data would have
864 to be copied as arguments to the method call:
865
866 $csv->bind_columns (\($foo, $bar));
867 $status = $csv->print ($fh, undef);
868
869 A short benchmark
870
871 my @data = ("aa" .. "zz");
872 $csv->bind_columns (\(@data));
873
874 $csv->print ($fh, [ @data ]); # 11800 recs/sec
875 $csv->print ($fh, \@data ); # 57600 recs/sec
876 $csv->print ($fh, undef ); # 48500 recs/sec
877
878 say
879 $status = $csv->say ($fh, $colref);
880
881 Like "print", but "eol" defaults to "$\".
882
883 print_hr
884 $csv->print_hr ($fh, $ref);
885
886 Provides an easy way to print a $ref (as fetched with "getline_hr")
887 provided the column names are set with "column_names".
888
889 It is just a wrapper method with basic parameter checks over
890
891 $csv->print ($fh, [ map { $ref->{$_} } $csv->column_names ]);
892
893 combine
894 $status = $csv->combine (@fields);
895
896 This method constructs a "CSV" record from @fields, returning success
897 or failure. Failure can result from lack of arguments or an argument
898 that contains an invalid character. Upon success, "string" can be
899 called to retrieve the resultant "CSV" string. Upon failure, the
900 value returned by "string" is undefined and "error_input" could be
901 called to retrieve the invalid argument.
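
For example (the expected result is shown in the comment):

    my $csv = Text::CSV_XS->new ();
    if ($csv->combine ("abc", "def,g", 42)) {
        print $csv->string, "\n"; # abc,"def,g",42
        }
    else {
        warn "combine () failed on: " . $csv->error_input . "\n";
        }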
902
903 string
904 $line = $csv->string ();
905
906 This method returns the input to "parse" or the resultant "CSV"
907 string of "combine", whichever was called more recently.
908
909 getline
910 $colref = $csv->getline ($fh);
911
912 This is the counterpart to "print", as "parse" is the counterpart to
913 "combine": it parses a row from the $fh handle using the "getline"
914 method associated with $fh and parses this row into an array ref.
915 This array ref is returned by the function or "undef" for failure.
916 When $fh does not support "getline", you are likely to hit errors.
917
918 When fields are bound with "bind_columns" the return value is a
919 reference to an empty list.
920
921 The "string", "fields", and "status" methods are meaningless again.
922
923 getline_all
924 $arrayref = $csv->getline_all ($fh);
925 $arrayref = $csv->getline_all ($fh, $offset);
926 $arrayref = $csv->getline_all ($fh, $offset, $length);
927
928 This will return a reference to a list of getline ($fh) results. In
929 this call, "keep_meta_info" is disabled. If $offset is negative, as
930 with "splice", only the last "abs ($offset)" records of $fh are taken
931 into consideration.
932
933 Given a CSV file with 10 lines:
934
935 lines call
936 ----- ---------------------------------------------------------
937 0..9 $csv->getline_all ($fh) # all
938 0..9 $csv->getline_all ($fh, 0) # all
939 8..9 $csv->getline_all ($fh, 8) # start at 8
940 - $csv->getline_all ($fh, 0, 0) # start at 0 first 0 rows
941 0..4 $csv->getline_all ($fh, 0, 5) # start at 0 first 5 rows
942 4..5 $csv->getline_all ($fh, 4, 2) # start at 4 first 2 rows
943 8..9 $csv->getline_all ($fh, -2) # last 2 rows
944 6..7 $csv->getline_all ($fh, -4, 2) # first 2 of last 4 rows
945
946 getline_hr
947 The "getline_hr" and "column_names" methods work together to allow you
948 to have rows returned as hashrefs. You must call "column_names" first
949 to declare your column names.
950
951 $csv->column_names (qw( code name price description ));
952 $hr = $csv->getline_hr ($fh);
953 print "Price for $hr->{name} is $hr->{price} EUR\n";
954
955 "getline_hr" will croak if called before "column_names".
956
957 Note that "getline_hr" creates a hashref for every row and will be
958 much slower than the combined use of "bind_columns" and "getline" but
959 still offering the same ease of use hashref inside the loop:
960
961 my @cols = @{$csv->getline ($fh)};
962 $csv->column_names (@cols);
963 while (my $row = $csv->getline_hr ($fh)) {
964 print $row->{price};
965 }
966
967 Could easily be rewritten to the much faster:
968
969 my @cols = @{$csv->getline ($fh)};
970 my $row = {};
971 $csv->bind_columns (\@{$row}{@cols});
972 while ($csv->getline ($fh)) {
973 print $row->{price};
974 }
975
Your mileage may vary for the size of the data and the number of rows.
With perl-5.14.2 the comparison for a 100_000 line file with 14
columns:
978
979 Rate hashrefs getlines
980 hashrefs 1.00/s -- -76%
981 getlines 4.15/s 313% --
982
983 getline_hr_all
984 $arrayref = $csv->getline_hr_all ($fh);
985 $arrayref = $csv->getline_hr_all ($fh, $offset);
986 $arrayref = $csv->getline_hr_all ($fh, $offset, $length);
987
988 This will return a reference to a list of getline_hr ($fh) results.
989 In this call, "keep_meta_info" is disabled.
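
For example, assuming a file "sales.csv" (a made-up name) with a
header record on the first line:

    my $csv = Text::CSV_XS->new ({ binary => 1, auto_diag => 1 });
    open my $fh, "<", "sales.csv" or die "sales.csv: $!";
    $csv->column_names ($csv->getline ($fh)); # header line as keys
    my $aoh = $csv->getline_hr_all ($fh);     # rest of the file
    printf "%d records\n", scalar @$aoh;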
990
991 parse
992 $status = $csv->parse ($line);
993
994 This method decomposes a "CSV" string into fields, returning success
995 or failure. Failure can result from a lack of argument or the given
996 "CSV" string is improperly formatted. Upon success, "fields" can be
997 called to retrieve the decomposed fields. Upon failure calling "fields"
998 will return undefined data and "error_input" can be called to
999 retrieve the invalid argument.
1000
1001 You may use the "types" method for setting column types. See "types"'
1002 description below.
1003
1004 The $line argument is supposed to be a simple scalar. Everything else
1005 is supposed to croak and set error 1500.
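
For example (the expected fields are shown in the comment):

    my $csv = Text::CSV_XS->new ({ binary => 1 });
    if ($csv->parse (q{1,"foo, bar",3})) {
        my @fields = $csv->fields; # ("1", "foo, bar", "3")
        print join ("|", @fields), "\n";
        }
    else {
        warn "parse () failed: " . $csv->error_diag . "\n";
        }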
1006
1007 fragment
1008 This function tries to implement RFC7111 (URI Fragment Identifiers for
1009 the text/csv Media Type) - http://tools.ietf.org/html/rfc7111
1010
1011 my $AoA = $csv->fragment ($fh, $spec);
1012
1013 In specifications, "*" is used to specify the last item, a dash ("-")
1014 to indicate a range. All indices are 1-based: the first row or
1015 column has index 1. Selections can be combined with the semi-colon
1016 (";").
1017
1018 When using this method in combination with "column_names", the
1019 returned reference will point to a list of hashes instead of a list
1020 of lists. A disjointed cell-based combined selection might return
rows with a different number of columns, making the use of hashes
unpredictable.
1023
1024 $csv->column_names ("Name", "Age");
1025 my $AoH = $csv->fragment ($fh, "col=3;8");
1026
1027 If the "after_parse" callback is active, it is also called on every
1028 line parsed and skipped before the fragment.
1029
1030 row
1031 row=4
1032 row=5-7
1033 row=6-*
1034 row=1-2;4;6-*
1035
1036 col
1037 col=2
1038 col=1-3
1039 col=4-*
1040 col=1-2;4;7-*
1041
1042 cell
1043 In cell-based selection, the comma (",") is used to pair row and
1044 column
1045
1046 cell=4,1
1047
1048 The range operator ("-") using "cell"s can be used to define top-left
1049 and bottom-right "cell" location
1050
1051 cell=3,1-4,6
1052
1053 The "*" is only allowed in the second part of a pair
1054
1055 cell=3,2-*,2 # row 3 till end, only column 2
1056 cell=3,2-3,* # column 2 till end, only row 3
1057 cell=3,2-*,* # strip row 1 and 2, and column 1
1058
1059 Cells and cell ranges may be combined with ";", possibly resulting in
rows with a different number of columns
1061
1062 cell=1,1-2,2;3,3-4,4;1,4;4,1
1063
1064 Disjointed selections will only return selected cells. The cells
1065 that are not specified will not be included in the returned
set, not even as "undef". As an example, given a "CSV" like
1067
1068 11,12,13,...19
1069 21,22,...28,29
1070 : :
1071 91,...97,98,99
1072
1073 with "cell=1,1-2,2;3,3-4,4;1,4;4,1" will return:
1074
1075 11,12,14
1076 21,22
1077 33,34
1078 41,43,44
1079
Overlapping cell-specs will return those cells only once. So
"cell=1,1-3,3;2,2-4,4;2,3;4,2" will return:
1082
1083 11,12,13
1084 21,22,23,24
1085 31,32,33,34
1086 42,43,44
1087
1088 RFC7111 <http://tools.ietf.org/html/rfc7111> does not allow different
1089 types of specs to be combined (either "row" or "col" or "cell").
1090 Passing an invalid fragment specification will croak and set error
1091 2013.
1092
1093 column_names
1094 Set the "keys" that will be used in the "getline_hr" calls. If no
1095 keys (column names) are passed, it will return the current setting as a
1096 list.
1097
1098 "column_names" accepts a list of scalars (the column names) or a
1099 single array_ref, so you can pass the return value from "getline" too:
1100
1101 $csv->column_names ($csv->getline ($fh));
1102
1103 "column_names" does no checking on duplicates at all, which might lead
1104 to unexpected results. Undefined entries will be replaced with the
1105 string "\cAUNDEF\cA", so
1106
1107 $csv->column_names (undef, "", "name", "name");
1108 $hr = $csv->getline_hr ($fh);
1109
1110 Will set "$hr->{"\cAUNDEF\cA"}" to the 1st field, "$hr->{""}" to the
1111 2nd field, and "$hr->{name}" to the 4th field, discarding the 3rd
1112 field.
1113
1114 "column_names" croaks on invalid arguments.
1115
1116 header
1117 This method does NOT work in perl-5.6.x
1118
1119 Parse the CSV header and set "sep", column_names and encoding.
1120
1121 my @hdr = $csv->header ($fh);
1122 $csv->header ($fh, { sep_set => [ ";", ",", "|", "\t" ] });
1123 $csv->header ($fh, { detect_bom => 1, munge_column_names => "lc" });
1124
1125 The first argument should be a file handle.
1126
This method resets some object properties, as it is supposed to be
invoked only once per file or stream. It will leave the attributes
"column_names" and "bound_columns" alone if setting column names is
disabled. Reading headers on previously processed objects might fail
on perl-5.8.0 and older.
1132
1133 Assuming that the file opened for parsing has a header, and the header
1134 does not contain problematic characters like embedded newlines, read
1135 the first line from the open handle then auto-detect whether the header
1136 separates the column names with a character from the allowed separator
1137 list.
1138
1139 If any of the allowed separators matches, and none of the other
1140 allowed separators match, set "sep" to that separator for the
1141 current CSV_XS instance and use it to parse the first line, map those
1142 to lowercase, and use that to set the instance "column_names":
1143
1144 my $csv = Text::CSV_XS->new ({ binary => 1, auto_diag => 1 });
1145 open my $fh, "<", "file.csv";
1146 binmode $fh; # for Windows
1147 $csv->header ($fh);
1148 while (my $row = $csv->getline_hr ($fh)) {
1149 ...
1150 }
1151
1152 If the header is empty, contains more than one unique separator out of
1153 the allowed set, contains empty fields, or contains identical fields
1154 (after folding), it will croak with error 1010, 1011, 1012, or 1013
1155 respectively.
1156
1157 If the header contains embedded newlines or is not valid CSV in any
1158 other way, this method will croak and leave the parse error untouched.
1159
1160 A successful call to "header" will always set the "sep" of the $csv
1161 object. This behavior can not be disabled.
1162
1163 return value
1164
1165 On error this method will croak.
1166
1167 In list context, the headers will be returned whether they are used to
1168 set "column_names" or not.
1169
1170 In scalar context, the instance itself is returned. Note: the values
1171 as found in the header will effectively be lost if "set_column_names"
1172 is false.
1173
1174 Options
1175
1176 sep_set
1177 $csv->header ($fh, { sep_set => [ ";", ",", "|", "\t" ] });
1178
1179 The list of legal separators defaults to "[ ";", "," ]" and can be
1180 changed by this option. As this is probably the most often used
1181 option, it can be passed on its own as an unnamed argument:
1182
1183 $csv->header ($fh, [ ";", ",", "|", "\t", "::", "\x{2063}" ]);
1184
1185 Multi-byte sequences are allowed, both multi-character and
1186 Unicode. See "sep".
1187
1188 detect_bom
1189 $csv->header ($fh, { detect_bom => 1 });
1190
1191 The default behavior is to detect if the header line starts with a
1192 BOM. If the header has a BOM, use that to set the encoding of $fh.
1193 This default behavior can be disabled by passing a false value to
1194 "detect_bom".
1195
1196 Supported encodings from BOM are: UTF-8, UTF-16BE, UTF-16LE,
1197 UTF-32BE, and UTF-32LE. BOM's also support UTF-1, UTF-EBCDIC, SCSU,
1198 BOCU-1, and GB-18030 but Encode does not (yet). UTF-7 is not
1199 supported.
1200
If a supported BOM was detected as start of the stream, it is stored
in the object attribute "ENCODING".
1203
1204 my $enc = $csv->{ENCODING};
1205
1206 The encoding is used with "binmode" on $fh.
1207
1208 If the handle was opened in a (correct) encoding, this method will
1209 not alter the encoding, as it checks the leading bytes of the first
line. In case the stream starts with a decoded BOM ("U+FEFF"),
1211 "{ENCODING}" will be "" (empty) instead of the default "undef".
1212
1213 munge_column_names
1214 This option offers the means to modify the column names into
1215 something that is most useful to the application. The default is to
1216 map all column names to lower case.
1217
1218 $csv->header ($fh, { munge_column_names => "lc" });
1219
1220 The following values are available:
1221
1222 lc - lower case
1223 uc - upper case
1224 none - do not change
1225 \%hash - supply a mapping
1226 \&cb - supply a callback
1227
1228 Literal:
1229
1230 $csv->header ($fh, { munge_column_names => "none" });
1231
1232 Hash:
1233
    $csv->header ($fh, { munge_column_names => { foo => "sombrero" }});
1235
If a value does not exist, the original value is used unchanged.
1237
1238 Callback:
1239
1240 $csv->header ($fh, { munge_column_names => sub { fc } });
1241 $csv->header ($fh, { munge_column_names => sub { "column_".$col++ } });
1242 $csv->header ($fh, { munge_column_names => sub { lc (s/\W+/_/gr) } });
1243
1244 As this callback is called in a "map", you can use $_ directly.
1245
1246 set_column_names
1247 $csv->header ($fh, { set_column_names => 1 });
1248
The default is to set the instance's column names using "column_names"
if the method is successful, so subsequent calls to "getline_hr" can
return a hash. Setting the header can be disabled by using a false
value for this option.
1253
1254 As described in "return value" above, content is lost in scalar
1255 context.
1256
1257 Validation
1258
1259 When receiving CSV files from external sources, this method can be
1260 used to protect against changes in the layout by restricting to known
1261 headers (and typos in the header fields).
1262
1263 my %known = (
1264 "record key" => "c_rec",
1265 "rec id" => "c_rec",
1266 "id_rec" => "c_rec",
1267 "kode" => "code",
1268 "code" => "code",
1269 "vaule" => "value",
1270 "value" => "value",
1271 );
1272 my $csv = Text::CSV_XS->new ({ binary => 1, auto_diag => 1 });
1273 open my $fh, "<", $source or die "$source: $!";
1274 $csv->header ($fh, { munge_column_names => sub {
1275 s/\s+$//;
1276 s/^\s+//;
1277 $known{lc $_} or die "Unknown column '$_' in $source";
1278 }});
1279 while (my $row = $csv->getline_hr ($fh)) {
1280 say join "\t", $row->{c_rec}, $row->{code}, $row->{value};
1281 }
1282
1283 bind_columns
1284 Takes a list of scalar references to be used for output with "print"
1285 or to store in the fields fetched by "getline". When you do not pass
1286 enough references to store the fetched fields in, "getline" will fail
1287 with error 3006. If you pass more than there are fields to return,
1288 the content of the remaining references is left untouched.
1289
1290 $csv->bind_columns (\$code, \$name, \$price, \$description);
1291 while ($csv->getline ($fh)) {
1292 print "The price of a $name is \x{20ac} $price\n";
1293 }
1294
1295 To reset or clear all column binding, call "bind_columns" with the
1296 single argument "undef". This will also clear column names.
1297
1298 $csv->bind_columns (undef);
1299
1300 If no arguments are passed at all, "bind_columns" will return the list
1301 of current bindings or "undef" if no binds are active.
1302
1303 Note that in parsing with "bind_columns", the fields are set on the
1304 fly. That implies that if the third field of a row causes an error
1305 (or this row has just two fields where the previous row had more), the
1306 first two fields already have been assigned the values of the current
1307 row, while the rest of the fields will still hold the values of the
1308 previous row. If you want the parser to fail in these cases, use the
1309 "strict" attribute.
1310
1311 eof
1312 $eof = $csv->eof ();
1313
1314 If "parse" or "getline" was used with an IO stream, this method will
1315 return true (1) if the last call hit end of file, otherwise it will
1316 return false (''). This is useful to see the difference between a
1317 failure and end of file.
1318
1319 Note that if the parsing of the last line caused an error, "eof" is
1320 still true. That means that if you are not using "auto_diag", an idiom
1321 like
1322
1323 while (my $row = $csv->getline ($fh)) {
1324 # ...
1325 }
1326 $csv->eof or $csv->error_diag;
1327
1328 will not report the error. You would have to change that to
1329
1330 while (my $row = $csv->getline ($fh)) {
1331 # ...
1332 }
1333 +$csv->error_diag and $csv->error_diag;
1334
1335 types
1336 $csv->types (\@tref);
1337
1338 This method is used to force that (all) columns are of a given type.
1339 For example, if you have an integer column, two columns with
1340 doubles and a string column, then you might do a
1341
1342 $csv->types ([Text::CSV_XS::IV (),
1343 Text::CSV_XS::NV (),
1344 Text::CSV_XS::NV (),
1345 Text::CSV_XS::PV ()]);
1346
1347 Column types are used only for decoding columns while parsing, in
1348 other words by the "parse" and "getline" methods.
1349
1350 You can unset column types by doing a
1351
1352 $csv->types (undef);
1353
1354 or fetch the current type settings with
1355
1356 $types = $csv->types ();
1357
1358 IV Set field type to integer.
1359
1360 NV Set field type to numeric/float.
1361
1362 PV Set field type to string.
1363
1364 fields
1365 @columns = $csv->fields ();
1366
1367 This method returns the input to "combine" or the resultant
1368 decomposed fields of a successful "parse", whichever was called more
1369 recently.
1370
1371 Note that the return value is undefined after using "getline", which
1372 does not fill the data structures returned by "parse".
1373
1374 meta_info
1375 @flags = $csv->meta_info ();
1376
1377 This method returns the "flags" of the input to "combine" or the flags
1378 of the resultant decomposed fields of "parse", whichever was called
1379 more recently.
1380
1381 For each field, a meta_info field will hold flags that inform
1382 something about the field returned by the "fields" method or
1383 passed to the "combine" method. The flags are bit-wise-"or"'d like:
1384
1385 " "0x0001
1386 The field was quoted.
1387
1388 " "0x0002
1389 The field was binary.
1390
1391 See the "is_***" methods below.
1392
1393 is_quoted
1394 my $quoted = $csv->is_quoted ($column_idx);
1395
1396 Where $column_idx is the (zero-based) index of the column in the
1397 last result of "parse".
1398
1399 This returns a true value if the data in the indicated column was
1400 enclosed in "quote_char" quotes. This might be important for fields
1401 where content ",20070108," is to be treated as a numeric value, and
1402 where ","20070108"," is explicitly marked as character string data.
1403
1404 This method is only valid when "keep_meta_info" is set to a true value.
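
A small sketch (expected results are shown in the comments):

    my $csv = Text::CSV_XS->new ({ keep_meta_info => 1 });
    $csv->parse (q{20070108,"20070109"}) or die "" . $csv->error_diag;
    print $csv->is_quoted (0) ? "quoted\n" : "bare\n"; # bare
    print $csv->is_quoted (1) ? "quoted\n" : "bare\n"; # quoted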
1405
1406 is_binary
1407 my $binary = $csv->is_binary ($column_idx);
1408
1409 Where $column_idx is the (zero-based) index of the column in the
1410 last result of "parse".
1411
1412 This returns a true value if the data in the indicated column contained
1413 any byte in the range "[\x00-\x08,\x10-\x1F,\x7F-\xFF]".
1414
1415 This method is only valid when "keep_meta_info" is set to a true value.
1416
1417 is_missing
1418 my $missing = $csv->is_missing ($column_idx);
1419
1420 Where $column_idx is the (zero-based) index of the column in the
1421 last result of "getline_hr".
1422
1423 $csv->keep_meta_info (1);
1424 while (my $hr = $csv->getline_hr ($fh)) {
1425 $csv->is_missing (0) and next; # This was an empty line
1426 }
1427
1428 When using "getline_hr", it is impossible to tell if the parsed
1429 fields are "undef" because they where not filled in the "CSV" stream
1430 or because they were not read at all, as all the fields defined by
1431 "column_names" are set in the hash-ref. If you still need to know if
1432 all fields in each row are provided, you should enable "keep_meta_info"
1433 so you can check the flags.
1434
1435 If "keep_meta_info" is "false", "is_missing" will always return
1436 "undef", regardless of $column_idx being valid or not. If this
1437 attribute is "true" it will return either 0 (the field is present) or 1
1438 (the field is missing).
1439
1440 A special case is the empty line. If the line is completely empty -
1441 after dealing with the flags - this is still a valid CSV line: it is a
1442 record of just one single empty field. However, if "keep_meta_info" is
1443 set, invoking "is_missing" with index 0 will now return true.
1444
1445 status
1446 $status = $csv->status ();
1447
1448 This method returns the status of the last invoked "combine" or "parse"
1449 call. Status is success (true: 1) or failure (false: "undef" or 0).
1450
1451 error_input
1452 $bad_argument = $csv->error_input ();
1453
1454 This method returns the erroneous argument (if it exists) of "combine"
1455 or "parse", whichever was called more recently. If the last
1456 invocation was successful, "error_input" will return "undef".
1457
1458 error_diag
1459 Text::CSV_XS->error_diag ();
1460 $csv->error_diag ();
1461 $error_code = 0 + $csv->error_diag ();
1462 $error_str = "" . $csv->error_diag ();
1463 ($cde, $str, $pos, $rec, $fld) = $csv->error_diag ();
1464
1465 If (and only if) an error occurred, this function returns the
1466 diagnostics of that error.
1467
1468 If called in void context, this will print the internal error code and
1469 the associated error message to STDERR.
1470
1471 If called in list context, this will return the error code and the
1472 error message in that order. If the last error was from parsing, the
1473 rest of the values returned are a best guess at the location within
the line that was being parsed. Their values are 1-based. The
position currently is the index of the byte at which the parsing
failed in the current record. It might change to be the index of the
current character in a later release. The record is the index of the
record parsed by the csv instance. The field number is the index of
the field the parser thinks it is currently trying to parse. See
1480 examples/csv-check for how this can be used.
1481
1482 If called in scalar context, it will return the diagnostics in a
1483 single scalar, a-la $!. It will contain the error code in numeric
1484 context, and the diagnostics message in string context.
1485
1486 When called as a class method or a direct function call, the
1487 diagnostics are that of the last "new" call.
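
For example, without "auto_diag" one might check explicitly (the
broken input is made up):

    my $csv  = Text::CSV_XS->new ({ binary => 1 });
    my $line = q{1,"no closing quote};
    unless ($csv->parse ($line)) {
        my ($cde, $str, $pos) = $csv->error_diag;
        warn "Error $cde at byte $pos: $str\n";
        }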
1488
1489 record_number
1490 $recno = $csv->record_number ();
1491
Returns the number of records parsed by this csv instance. This value should be
1493 more accurate than $. when embedded newlines come in play. Records
1494 written by this instance are not counted.
1495
1496 SetDiag
1497 $csv->SetDiag (0);
1498
1499 Use to reset the diagnostics if you are dealing with errors.

FUNCTIONS
1502 csv
1503 This function is not exported by default and should be explicitly
1504 requested:
1505
1506 use Text::CSV_XS qw( csv );
1507
This is a high-level function that aims at simple (user) interfaces.
1509 This can be used to read/parse a "CSV" file or stream (the default
1510 behavior) or to produce a file or write to a stream (define the "out"
1511 attribute). It returns an array- or hash-reference on parsing (or
1512 "undef" on fail) or the numeric value of "error_diag" on writing.
1513 When this function fails you can get to the error using the class call
1514 to "error_diag"
1515
1516 my $aoa = csv (in => "test.csv") or
1517 die Text::CSV_XS->error_diag;
1518
1519 This function takes the arguments as key-value pairs. This can be
1520 passed as a list or as an anonymous hash:
1521
1522 my $aoa = csv ( in => "test.csv", sep_char => ";");
1523 my $aoh = csv ({ in => $fh, headers => "auto" });
1524
1525 The arguments passed consist of two parts: the arguments to "csv"
1526 itself and the optional attributes to the "CSV" object used inside
1527 the function as enumerated and explained in "new".
1528
If not overridden, the default options used for CSV are
1530
1531 auto_diag => 1
1532 escape_null => 0
1533
1534 The option that is always set and cannot be altered is
1535
1536 binary => 1
1537
1538 As this function will likely be used in one-liners, it allows "quote"
1539 to be abbreviated as "quo", and "escape_char" to be abbreviated as
1540 "esc" or "escape".
1541
1542 Alternative invocations:
1543
1544 my $aoa = Text::CSV_XS::csv (in => "file.csv");
1545
1546 my $csv = Text::CSV_XS->new ();
1547 my $aoa = $csv->csv (in => "file.csv");
1548
1549 In the latter case, the object attributes are used from the existing
1550 object and the attribute arguments in the function call are ignored:
1551
1552 my $csv = Text::CSV_XS->new ({ sep_char => ";" });
1553 my $aoh = $csv->csv (in => "file.csv", bom => 1);
1554
1555 will parse using ";" as "sep_char", not ",".
1556
1557 in
1558
1559 Used to specify the source. "in" can be a file name (e.g. "file.csv"),
1560 which will be opened for reading and closed when finished, a file
1561 handle (e.g. $fh or "FH"), a reference to a glob (e.g. "\*ARGV"),
1562 the glob itself (e.g. *STDIN), or a reference to a scalar (e.g.
1563 "\q{1,2,"csv"}").
1564
1565 When used with "out", "in" should be a reference to a CSV structure
1566 (AoA or AoH) or a CODE-ref that returns an array-reference or a hash-
1567 reference. The code-ref will be invoked with no arguments.
1568
1569 my $aoa = csv (in => "file.csv");
1570
1571 open my $fh, "<", "file.csv";
1572 my $aoa = csv (in => $fh);
1573
1574 my $csv = [ [qw( Foo Bar )], [ 1, 2 ], [ 2, 3 ]];
1575 my $err = csv (in => $csv, out => "file.csv");
1576
1577 If called in void context without the "out" attribute, the resulting
1578 ref will be used as input to a subsequent call to csv:
1579
1580 csv (in => "file.csv", filter => { 2 => sub { length > 2 }})
1581
1582 will be a shortcut to
1583
1584 csv (in => csv (in => "file.csv", filter => { 2 => sub { length > 2 }}))
1585
1586 where, in the absence of the "out" attribute, this is a shortcut to
1587
1588 csv (in => csv (in => "file.csv", filter => { 2 => sub { length > 2 }}),
1589 out => *STDOUT)
1590
1591 out
1592
1593 csv (in => $aoa, out => "file.csv");
1594 csv (in => $aoa, out => $fh);
1595 csv (in => $aoa, out => STDOUT);
1596 csv (in => $aoa, out => *STDOUT);
1597 csv (in => $aoa, out => \*STDOUT);
1598 csv (in => $aoa, out => \my $data);
1599 csv (in => $aoa, out => undef);
1600 csv (in => $aoa, out => \"skip");
1601
1602 In output mode, the default CSV options when producing CSV are
1603
1604 eol => "\r\n"
1605
1606 The "fragment" attribute is ignored in output mode.
1607
1608 "out" can be a file name (e.g. "file.csv"), which will be opened for
1609 writing and closed when finished, a file handle (e.g. $fh or "FH"), a
1610 reference to a glob (e.g. "\*STDOUT"), the glob itself (e.g. *STDOUT),
1611 or a reference to a scalar (e.g. "\my $data").
1612
1613 csv (in => sub { $sth->fetch }, out => "dump.csv");
1614 csv (in => sub { $sth->fetchrow_hashref }, out => "dump.csv",
1615 headers => $sth->{NAME_lc});
1616
1617 When a code-ref is used for "in", the output is generated per
1618 invocation, so no buffering is involved. This implies that there is no
1619 size restriction on the number of records. The "csv" function ends when
1620 the coderef returns a false value.
1621
1622 If "out" is set to a reference of the literal string "skip", the output
1623 will be suppressed completely, which might be useful in combination
1624 with a filter for side effects only.
1625
1626 my %cache;
1627 csv (in => "dump.csv",
1628 out => \"skip",
1629 on_in => sub { $cache{$_[1][1]}++ });
1630
1631 Currently, setting "out" to any false value ("undef", "", 0) will be
1632 equivalent to "\"skip"".
1633
1634 encoding
1635
1636 If passed, it should be an encoding accepted by the ":encoding()"
1637 option to "open". There is no default value. This attribute does not
1638 work in perl 5.6.x. "encoding" can be abbreviated to "enc" for ease of
1639 use in command line invocations.
1640
1641 If "encoding" is set to the literal value "auto", the method "header"
1642 will be invoked on the opened stream to check if there is a BOM and set
1643 the encoding accordingly. This is equal to passing a true value in
1644 the option "detect_bom".
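
 For example (the file name is hypothetical), a UTF-16 export could be
 read with either the full or the abbreviated attribute name:

     my $aoa = csv (in => "export-utf16.csv", encoding => "UTF-16LE");
     my $aoa = csv (in => "export-utf16.csv", enc      => "UTF-16LE");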
1645
1646 detect_bom
1647
1648 If "detect_bom" is given, the method "header" will be invoked on
1649 the opened stream to check if there is a BOM and set the encoding
1650 accordingly.
1651
1652 "detect_bom" can be abbreviated to "bom".
1653
1654 This is the same as setting "encoding" to "auto".
1655
1656 Note that as the method "header" is invoked, its default is to also
1657 set the headers.
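
 A minimal sketch (the file name is hypothetical): let "header" detect a
 possible BOM, set the encoding, and use the first line as column names:

     my $aoh = csv (in => "export.csv", detect_bom => 1);
     # or abbreviated
     my $aoh = csv (in => "export.csv", bom => 1);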
1658
1659 headers
1660
1661 If this attribute is not given, the default behavior is to produce an
1662 array of arrays.
1663
1664 If "headers" is supplied, it should be an anonymous list of column
1665 names, an anonymous hashref, a coderef, or a literal flag: "auto",
1666 "lc", "uc", or "skip".
1667
1668 skip
1669 When "skip" is used, the header will not be included in the output.
1670
1671 my $aoa = csv (in => $fh, headers => "skip");
1672
1673 auto
1674 If "auto" is used, the first line of the "CSV" source will be read as
1675 the list of field headers and used to produce an array of hashes.
1676
1677 my $aoh = csv (in => $fh, headers => "auto");
1678
1679 lc
1680 If "lc" is used, the first line of the "CSV" source will be read as
1681 the list of field headers mapped to lower case and used to produce
1682 an array of hashes. This is a variation of "auto".
1683
1684 my $aoh = csv (in => $fh, headers => "lc");
1685
1686 uc
1687 If "uc" is used, the first line of the "CSV" source will be read as
1688 the list of field headers mapped to upper case and used to produce
1689 an array of hashes. This is a variation of "auto".
1690
1691 my $aoh = csv (in => $fh, headers => "uc");
1692
1693 CODE
1694 If a coderef is used, the first line of the "CSV" source will be
1695 read as the list of mangled field headers in which each field is
1696 passed as the only argument to the coderef. This list is used to
1697 produce an array of hashes.
1698
1699 my $aoh = csv (in => $fh,
1700 headers => sub { lc ($_[0]) =~ s/kode/code/gr });
1701
1702 This example is a variation of using "lc" where all occurrences of
1703 "kode" are replaced with "code".
1704
1705 ARRAY
1706 If "headers" is an anonymous list, the entries in the list will be
1707 used as field names. The first line is considered data instead of
1708 headers.
1709
1710 my $aoh = csv (in => $fh, headers => [qw( Foo Bar )]);
1711 csv (in => $aoa, out => $fh, headers => [qw( code description price )]);
1712
1713 HASH
1714 If "headers" is an hash reference, this implies "auto", but header
1715 fields for that exist as key in the hashref will be replaced by the
1716 value for that key. Given a CSV file like
1717
1718 post-kode,city,name,id number,fubble
1719 1234AA,Duckstad,Donald,13,"X313DF"
1720
1721 using
1722
1723 csv (headers => { "post-kode" => "pc", "id number" => "ID" }, ...
1724
1725 will return an entry like
1726
1727 { pc => "1234AA",
1728 city => "Duckstad",
1729 name => "Donald",
1730 ID => "13",
1731 fubble => "X313DF",
1732 }
1733
1734 See also "munge_column_names" and "set_column_names".
1735
1736 munge_column_names
1737
1738 If "munge_column_names" is set, the method "header" is invoked on
1739 the opened stream with all matching arguments to detect and set the
1740 headers.
1741
1742 "munge_column_names" can be abbreviated to "munge".
1743
1744 key
1745
1746 If passed, will default "headers" to "auto" and return a hashref
1747 instead of an array of hashes.
1748
1749 my $ref = csv (in => "test.csv", key => "code");
1750
1751 with test.csv like
1752
1753 code,product,price,color
1754 1,pc,850,gray
1755 2,keyboard,12,white
1756 3,mouse,5,black
1757
1758 will return
1759
1760 { 1 => {
1761 code => 1,
1762 color => 'gray',
1763 price => 850,
1764 product => 'pc'
1765 },
1766 2 => {
1767 code => 2,
1768 color => 'white',
1769 price => 12,
1770 product => 'keyboard'
1771 },
1772 3 => {
1773 code => 3,
1774 color => 'black',
1775 price => 5,
1776 product => 'mouse'
1777 }
1778 }
1779
1780 The "key" attribute can be combined with "headers" for "CSV" date that
1781 has no header line, like
1782
1783 my $ref = csv (
1784 in => "foo.csv",
1785 headers => [qw( c_foo foo bar description stock )],
1786 key => "c_foo",
1787 );
1788
1789 keep_headers
1790
1791 When using hashes, keep the column names in the arrayref passed, so
1792 all headers are available after the call in the original order.
1793
1794 my $aoh = csv (in => "file.csv", keep_headers => \my @hdr);
1795
1796 This attribute can be abbreviated to "kh" or passed as
1797 "keep_column_names".
1798
1799 This attribute implies a default of "auto" for the "headers" attribute.
1800
1801 fragment
1802
1803 Only output the fragment as defined in the "fragment" method. This
1804 option is ignored when generating "CSV". See "out".
1805
1806 Combining all of them could give something like
1807
1808 use Text::CSV_XS qw( csv );
1809 my $aoh = csv (
1810 in => "test.txt",
1811 encoding => "utf-8",
1812 headers => "auto",
1813 sep_char => "|",
1814 fragment => "row=3;6-9;15-*",
1815 );
1816 say $aoh->[15]{Foo};
1817
1818 sep_set
1819
1820 If "sep_set" is set, the method "header" is invoked on the opened
1821 stream to detect and set "sep_char" with the given set.
1822
1823 "sep_set" can be abbreviated to "seps".
1824
1825 Note that as the "header" method is invoked, its default is to also
1826 set the headers.
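
 A minimal sketch, assuming the separator is one of a known set but not
 known in advance (the file name is hypothetical):

     my $aoh = csv (in => "export.csv",
                    sep_set => [ ";", ",", "|", "\t" ]);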
1827
1828 set_column_names
1829
1830 If "set_column_names" is passed, the method "header" is invoked on
1831 the opened stream with all arguments meant for "header".
1832
1833 If "set_column_names" is passed as a false value, the content of the
1834 first row is only preserved if the output is AoA:
1835
1836 With an input-file like
1837
1838 bAr,foo
1839 1,2
1840 3,4,5
1841
1842 This call
1843
1844 my $aoa = csv (in => $file, set_column_names => 0);
1845
1846 will result in
1847
1848 [[ "bar", "foo" ],
1849 [ "1", "2" ],
1850 [ "3", "4", "5" ]]
1851
1852 and
1853
1854 my $aoa = csv (in => $file, set_column_names => 0, munge => "none");
1855
1856 will result in
1857
1858 [[ "bAr", "foo" ],
1859 [ "1", "2" ],
1860 [ "3", "4", "5" ]]
1861
1862 Callbacks
1863 Callbacks enable actions triggered from the inside of Text::CSV_XS.
1864
1865 While most of what this enables can easily be done in an unrolled
1866 loop as described in the "SYNOPSIS", callbacks can be used to meet
1867 special demands or to enhance the "csv" function.
1868
1869 error
1870 $csv->callbacks (error => sub { $csv->SetDiag (0) });
1871
1872 the "error" callback is invoked when an error occurs, but only
1873 when "auto_diag" is set to a true value. A callback is invoked with
1874 the values returned by "error_diag":
1875
1876 my ($c, $s);
1877
1878 sub ignore3006
1879 {
1880 my ($err, $msg, $pos, $recno, $fldno) = @_;
1881 if ($err == 3006) {
1882 # ignore this error
1883 ($c, $s) = (undef, undef);
1884 Text::CSV_XS->SetDiag (0);
1885 }
1886 # Any other error
1887 return;
1888 } # ignore3006
1889
1890 $csv->callbacks (error => \&ignore3006);
1891 $csv->bind_columns (\$c, \$s);
1892 while ($csv->getline ($fh)) {
1893 # Error 3006 will not stop the loop
1894 }
1895
1896 after_parse
1897 $csv->callbacks (after_parse => sub { push @{$_[1]}, "NEW" });
1898 while (my $row = $csv->getline ($fh)) {
1899 $row->[-1] eq "NEW";
1900 }
1901
1902 This callback is invoked after parsing with "getline" only if no
1903 error occurred. The callback is invoked with two arguments: the
1904 current "CSV" parser object and an array reference to the fields
1905 parsed.
1906
1907 The return code of the callback is ignored unless it is a reference
1908 to the string "skip", in which case the record will be skipped in
1909 "getline_all".
1910
1911 sub add_from_db
1912 {
1913 my ($csv, $row) = @_;
1914 $sth->execute ($row->[4]);
1915 push @$row, $sth->fetchrow_array;
1916 } # add_from_db
1917
1918 my $aoa = csv (in => "file.csv", callbacks => {
1919 after_parse => \&add_from_db });
1920
1921 This hook can be used for validation:
1922
1923 FAIL
1924 Die if any of the records does not validate a rule:
1925
1926 after_parse => sub {
1927 $_[1][4] =~ m/^[0-9]{4}\s?[A-Z]{2}$/ or
1928 die "5th field does not have a valid Dutch zipcode";
1929 }
1930
1931 DEFAULT
1932 Replace invalid fields with a default value:
1933
1934 after_parse => sub { $_[1][2] =~ m/^\d+$/ or $_[1][2] = 0 }
1935
1936 SKIP
1937 Skip records that have invalid fields (only applies to
1938 "getline_all"):
1939
1940 after_parse => sub { $_[1][0] =~ m/^\d+$/ or return \"skip"; }
1941
1942 before_print
1943 my $idx = 1;
1944 $csv->callbacks (before_print => sub { $_[1][0] = $idx++ });
1945 $csv->print (*STDOUT, [ 0, $_ ]) for @members;
1946
1947 This callback is invoked before printing with "print" only if no
1948 error occurred. The callback is invoked with two arguments: the
1949 current "CSV" parser object and an array reference to the fields
1950 passed.
1951
1952 The return code of the callback is ignored.
1953
1954 sub max_4_fields
1955 {
1956 my ($csv, $row) = @_;
1957 @$row > 4 and splice @$row, 4;
1958 } # max_4_fields
1959
1960 csv (in => csv (in => "file.csv"), out => *STDOUT,
1961 callbacks => { before_print => \&max_4_fields });
1962
1963 This callback is not active for "combine".
1964
1965 Callbacks for csv ()
1966
1967 The "csv" allows for some callbacks that do not integrate in XS
1968 internals but only feature the "csv" function.
1969
1970 csv (in => "file.csv",
1971 callbacks => {
1972 filter => { 6 => sub { $_ > 15 } }, # first
1973 after_parse => sub { say "AFTER PARSE"; }, # first
1974 after_in => sub { say "AFTER IN"; }, # second
1975 on_in => sub { say "ON IN"; }, # third
1976 },
1977 );
1978
1979 csv (in => $aoh,
1980 out => "file.csv",
1981 callbacks => {
1982 on_in => sub { say "ON IN"; }, # first
1983 before_out => sub { say "BEFORE OUT"; }, # second
1984 before_print => sub { say "BEFORE PRINT"; }, # third
1985 },
1986 );
1987
1988 filter
1989 This callback can be used to filter records. It is called just after
1990 a new record has been scanned. The callback accepts a:
1991
1992 hashref
1993 The keys are the index to the row (the field name or field number,
1994 1-based) and the values are subs to return a true or false value.
1995
1996 csv (in => "file.csv", filter => {
1997 3 => sub { m/a/ }, # third field should contain an "a"
1998 5 => sub { length > 4 }, # length of the 5th field minimal 5
1999 });
2000
2001 csv (in => "file.csv", filter => { foo => sub { $_ > 4 }});
2002
2003 If the keys to the filter hash contain any character that is not a
2004 digit, it will also implicitly set "headers" to "auto" unless
2005 "headers" was already passed as argument. When headers are
2006 active, returning an array of hashes, the filter is not applicable
2007 to the header itself.
2008
2009 All sub results should match, as in AND.
2010
2011 The context of the callback sets $_ localized to the field
2012 indicated by the filter. The two arguments are as with all other
2013 callbacks, so the other fields in the current row can be seen:
2014
2015 filter => { 3 => sub { $_ > 100 ? $_[1][1] =~ m/A/ : $_[1][6] =~ m/B/ }}
2016
2017 If the context is set to return a list of hashes ("headers" is
2018 defined), the current record will also be available in the
2019 localized %_:
2020
2021 filter => { 3 => sub { $_ > 100 && $_{foo} =~ m/A/ && $_{bar} < 1000 }}
2022
2023 If the filter is used to alter the content by changing $_, make
2024 sure that the sub returns true in order not to have that record
2025 skipped:
2026
2027 filter => { 2 => sub { $_ = uc }}
2028
2029 will upper-case the second field, and then skip it if the resulting
2030 content evaluates to false. To always accept, end with truth:
2031
2032 filter => { 2 => sub { $_ = uc; 1 }}
2033
2034 coderef
2035 csv (in => "file.csv", filter => sub { $n++; 0; });
2036
2037 If the argument to "filter" is a coderef, it is an alias or
2038 shortcut to a filter on column 0:
2039
2040 csv (filter => sub { $n++; 0 });
2041
2042 is equal to
2043
2044 csv (filter => { 0 => sub { $n++; 0 }});
2045
2046 filter-name
2047 csv (in => "file.csv", filter => "not_blank");
2048 csv (in => "file.csv", filter => "not_empty");
2049 csv (in => "file.csv", filter => "filled");
2050
2051 These are predefined filters:
2052
2053 Given a file like (line numbers prefixed for doc purpose only):
2054
2055 1:1,2,3
2056 2:
2057 3:,
2058 4:""
2059 5:,,
2060 6:, ,
2061 7:"",
2062 8:" "
2063 9:4,5,6
2064
2065 not_blank
2066 Filter out the blank lines
2067
2068 This filter is a shortcut for
2069
2070 filter => { 0 => sub { @{$_[1]} > 1 or
2071 defined $_[1][0] && $_[1][0] ne "" } }
2072
2073 Due to the implementation, it is currently impossible to also
2074 filter lines that consist only of a quoted empty field. These
2075 lines are also considered blank lines.
2076
2077 With the given example, lines 2 and 4 will be skipped.
2078
2079 not_empty
2080 Filter out lines where all the fields are empty.
2081
2082 This filter is a shortcut for
2083
2084 filter => { 0 => sub { grep { defined && $_ ne "" } @{$_[1]} } }
2085
2086 A space is not regarded as being empty, so given the example data,
2087 lines 2, 3, 4, 5, and 7 are skipped.
2088
2089 filled
2090 Filter out lines that have no visible data
2091
2092 This filter is a shortcut for
2093
2094 filter => { 0 => sub { grep { defined && m/\S/ } @{$_[1]} } }
2095
2096 This filter rejects all lines that do not have at least one field
2097 that does not evaluate to the empty string.
2098
2099 With the given example data, this filter would skip lines 2
2100 through 8.
2101
2102 after_in
2103 This callback is invoked for each record after all records have been
2104 parsed but before returning the reference to the caller. The hook is
2105 invoked with two arguments: the current "CSV" parser object and a
2106 reference to the record. The reference can be a reference to a
2107 HASH or a reference to an ARRAY as determined by the arguments.
2108
2109 This callback can also be passed as an attribute without the
2110 "callbacks" wrapper.
2111
2112 before_out
2113 This callback is invoked for each record before the record is
2114 printed. The hook is invoked with two arguments: the current "CSV"
2115 parser object and a reference to the record. The reference can be a
2116 reference to a HASH or a reference to an ARRAY as determined by the
2117 arguments.
2118
2119 This callback can also be passed as an attribute without the
2120 "callbacks" wrapper.
2121
2122 This callback makes the row available in %_ if the row is a hashref.
2123 In this case %_ is writable and will change the original row.
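
 A sketch, assuming $aoh holds an array of hashes and "clean.csv" is the
 (hypothetical) target, that trims surrounding whitespace from every
 value just before it is written:

     csv (in         => $aoh,
          out        => "clean.csv",
          before_out => sub { s/^\s+|\s+$//g for values %{$_[1]} });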
2124
2125 on_in
2126 This callback acts exactly as the "after_in" or the "before_out"
2127 hooks.
2128
2129 This callback can also be passed as an attribute without the
2130 "callbacks" wrapper.
2131
2132 This callback makes the row available in %_ if the row is a hashref.
2133 In this case %_ is writable and will change the original row. So e.g.
2134 with
2135
2136 my $aoh = csv (
2137 in => \"foo\n1\n2\n",
2138 headers => "auto",
2139 on_in => sub { $_{bar} = 2; },
2140 );
2141
2142 $aoh will be:
2143
2144 [ { foo => 1,
2145 bar => 2,
2146 }
2147 { foo => 2,
2148 bar => 2,
2149 }
2150 ]
2151
2152 csv
2153 The function "csv" can also be called as a method or with an
2154 existing Text::CSV_XS object. This can help when the function is to
2155 be invoked many times: passing an existing instance avoids the
2156 overhead of creating the object internally over and over
2157 again.
2158
2159 my $csv = Text::CSV_XS->new ({ binary => 1, auto_diag => 1 });
2160
2161 my $aoa = $csv->csv (in => $fh);
2162 my $aoa = csv (in => $fh, csv => $csv);
2163
2164 both act the same. Running this 20000 times on a 20-line CSV file
2165 showed a 53% speedup.
2166
2168 Combine (...)
2169 Parse (...)
2170
2171 The arguments to these internal functions are deliberately not
2172 described or documented in order to allow the module authors to
2173 change them when they feel the need. Using them is highly
2174 discouraged as the API may change in future releases.
2175
2177 Reading a CSV file line by line:
2178 my $csv = Text::CSV_XS->new ({ binary => 1, auto_diag => 1 });
2179 open my $fh, "<", "file.csv" or die "file.csv: $!";
2180 while (my $row = $csv->getline ($fh)) {
2181 # do something with @$row
2182 }
2183 close $fh or die "file.csv: $!";
2184
2185 or
2186
2187 my $aoa = csv (in => "file.csv", on_in => sub {
2188 # do something with %_
2189 });
2190
2191 Reading only a single column
2192
2193 my $csv = Text::CSV_XS->new ({ binary => 1, auto_diag => 1 });
2194 open my $fh, "<", "file.csv" or die "file.csv: $!";
2195 # get only the 4th column
2196 my @column = map { $_->[3] } @{$csv->getline_all ($fh)};
2197 close $fh or die "file.csv: $!";
2198
2199 with "csv", you could do
2200
2201 my @column = map { $_->[0] }
2202 @{csv (in => "file.csv", fragment => "col=4")};
2203
2204 Parsing CSV strings:
2205 my $csv = Text::CSV_XS->new ({ keep_meta_info => 1, binary => 1 });
2206
2207 my $sample_input_string =
2208 qq{"I said, ""Hi!""",Yes,"",2.34,,"1.09","\x{20ac}",};
2209 if ($csv->parse ($sample_input_string)) {
2210 my @field = $csv->fields;
2211 foreach my $col (0 .. $#field) {
2212 my $quo = $csv->is_quoted ($col) ? $csv->{quote_char} : "";
2213 printf "%2d: %s%s%s\n", $col, $quo, $field[$col], $quo;
2214 }
2215 }
2216 else {
2217 print STDERR "parse () failed on argument: ",
2218 $csv->error_input, "\n";
2219 $csv->error_diag ();
2220 }
2221
2222 Parsing CSV from memory
2223
2224 Given a complete CSV data-set in scalar $data, generate a list of
2225 lists to represent the rows and fields
2226
2227 # The data
2228 my $data = join "\r\n" => map { join "," => 0 .. 5 } 0 .. 5;
2229
2230 # in a loop
2231 my $csv = Text::CSV_XS->new ({ binary => 1, auto_diag => 1 });
2232 open my $fh, "<", \$data;
2233 my @foo;
2234 while (my $row = $csv->getline ($fh)) {
2235 push @foo, $row;
2236 }
2237 close $fh;
2238
2239 # a single call
2240 my $foo = csv (in => \$data);
2241
2242 Printing CSV data
2243 The fast way: using "print"
2244
2245 An example for creating "CSV" files using the "print" method:
2246
2247 my $csv = Text::CSV_XS->new ({ binary => 1, eol => $/ });
2248 open my $fh, ">", "foo.csv" or die "foo.csv: $!";
2249 for (1 .. 10) {
2250 $csv->print ($fh, [ $_, "$_" ]) or $csv->error_diag;
2251 }
2252 close $fh or die "$tbl.csv: $!";
2253
2254 The slow way: using "combine" and "string"
2255
2256 or using the slower "combine" and "string" methods:
2257
2258 my $csv = Text::CSV_XS->new;
2259
2260 open my $csv_fh, ">", "hello.csv" or die "hello.csv: $!";
2261
2262 my @sample_input_fields = (
2263 'You said, "Hello!"', 5.67,
2264 '"Surely"', '', '3.14159');
2265 if ($csv->combine (@sample_input_fields)) {
2266 print $csv_fh $csv->string, "\n";
2267 }
2268 else {
2269 print "combine () failed on argument: ",
2270 $csv->error_input, "\n";
2271 }
2272 close $csv_fh or die "hello.csv: $!";
2273
2274 Generating CSV into memory
2275
2276 Format a data-set (@foo) into a scalar value in memory ($data):
2277
2278 # The data
2279 my @foo = map { [ 0 .. 5 ] } 0 .. 3;
2280
2281 # in a loop
2282 my $csv = Text::CSV_XS->new ({ binary => 1, auto_diag => 1, eol => "\r\n" });
2283 open my $fh, ">", \my $data;
2284 $csv->print ($fh, $_) for @foo;
2285 close $fh;
2286
2287 # a single call
2288 csv (in => \@foo, out => \my $data);
2289
2290 Rewriting CSV
2291 Rewrite "CSV" files with ";" as separator character to well-formed
2292 "CSV":
2293
2294 use Text::CSV_XS qw( csv );
2295 csv (in => csv (in => "bad.csv", sep_char => ";"), out => *STDOUT);
2296
2297 As "STDOUT" is now default in "csv", a one-liner converting a UTF-16
2298 CSV file with BOM and TAB-separation to valid UTF-8 CSV could be:
2299
2300 $ perl -C3 -MText::CSV_XS=csv -we\
2301 'csv(in=>"utf16tab.csv",encoding=>"utf16",sep=>"\t")' >utf8.csv
2302
2303 Dumping database tables to CSV
2304 Dumping a database table can be as simple as this (TIMTOWTDI):
2305
2306 my $dbh = DBI->connect (...);
2307 my $sql = "select * from foo";
2308
2309 # using your own loop
2310 open my $fh, ">", "foo.csv" or die "foo.csv: $!\n";
2311 my $csv = Text::CSV_XS->new ({ binary => 1, eol => "\r\n" });
2312 my $sth = $dbh->prepare ($sql); $sth->execute;
2313 $csv->print ($fh, $sth->{NAME_lc});
2314 while (my $row = $sth->fetch) {
2315 $csv->print ($fh, $row);
2316 }
2317
2318 # using the csv function, all in memory
2319 csv (out => "foo.csv", in => $dbh->selectall_arrayref ($sql));
2320
2321 # using the csv function, streaming with callbacks
2322 my $sth = $dbh->prepare ($sql); $sth->execute;
2323 csv (out => "foo.csv", in => sub { $sth->fetch });
2324 csv (out => "foo.csv", in => sub { $sth->fetchrow_hashref });
2325
2326 Note that this does not discriminate between "empty" values and NULL-
2327 values from the database, as both will be the same empty field in CSV.
2328 To enable distinction between the two, use "quote_empty".
2329
2330 csv (out => "foo.csv", in => sub { $sth->fetch }, quote_empty => 1);
2331
2332 If the database import utility supports special sequences to insert
2333 "NULL" values into the database, like MySQL/MariaDB supports "\N",
2334 use a filter or a map
2335
2336 csv (out => "foo.csv", in => sub { $sth->fetch },
2337 on_in => sub { $_ //= "\\N" for @{$_[1]} });
2338
2339 while (my $row = $sth->fetch) {
2340 $csv->print ($fh, [ map { $_ // "\\N" } @$row ]);
2341 }
2342
2343 Note that this will not work as expected when choosing the backslash
2344 ("\") as "escape_char", as that will cause the "\" to need to be
2345 escaped by yet another "\", which will cause the field to need
2346 quotation and thus end up as "\\N" instead of "\N". See also
2347 "undef_str".
2348
2349 These special sequences are not recognized by Text::CSV_XS on parsing
2350 the CSV generated like this, but map and filter are your friends again:
2351
2352 while (my $row = $csv->getline ($fh)) {
2353 $sth->execute (map { $_ eq "\\N" ? undef : $_ } @$row);
2354 }
2355
2356 csv (in => "foo.csv", filter => { 1 => sub {
2357 $sth->execute (map { $_ eq "\\N" ? undef : $_ } @{$_[1]}); 0; }});
2358
2359 The examples folder
2360 For more extended examples, see the examples/ sub-directory [1] in the
2361 original distribution or the git repository [2].
2362
2363 1. https://github.com/Tux/Text-CSV_XS/tree/master/examples
2364 2. https://github.com/Tux/Text-CSV_XS
2365
2366 The following files can be found there:
2367
2368 parser-xs.pl
2369 This can be used as a boilerplate to parse invalid "CSV" and to parse
2370 beyond (expected) errors, as an alternative to using the "error" callback.
2371
2372 $ perl examples/parser-xs.pl bad.csv >good.csv
2373
2374 csv-check
2375 This is a command-line tool that uses parser-xs.pl techniques to
2376 check the "CSV" file and report on its content.
2377
2378 $ csv-check files/utf8.csv
2379 Checked files/utf8.csv with csv-check 1.9
2380 using Text::CSV_XS 1.32 with perl 5.26.0 and Unicode 9.0.0
2381 OK: rows: 1, columns: 2
2382 sep = <,>, quo = <">, bin = <1>, eol = <"\n">
2383
2384 csv2xls
2385 A script to convert "CSV" to Microsoft Excel ("XLS"). This requires
2386 extra modules Date::Calc and Spreadsheet::WriteExcel. The converter
2387 accepts various options and can produce UTF-8 compliant Excel files.
2388
2389 csv2xlsx
2390 A script to convert "CSV" to Microsoft Excel ("XLSX"). This requires
2391 the modules Date::Calc and Excel::Writer::XLSX. The converter
2392 does accept various options including merging several "CSV" files
2393 into a single Excel file.
2394
2395 csvdiff
2396 A script that provides colorized diff on sorted CSV files, assuming
2397 the first line is the header and the first field is the key. Output options
2398 include colorized ANSI escape codes or HTML.
2399
2400 $ csvdiff --html --output=diff.html file1.csv file2.csv
2401
2402 rewrite.pl
2403 A script to rewrite (in)valid CSV into valid CSV files. The script has
2404 options to generate confusing CSV files or CSV files that conform to
2405 Dutch MS-Excel exports (using ";" as separator).
2406
2407 By default, the script honors the BOM and auto-detects the separator,
2408 converting the input to standard CSV with "," as separator.
2409
2411 Text::CSV_XS is not designed to detect the characters used to quote
2412 and separate fields. The parsing is done using predefined (default)
2413 settings. In the examples sub-directory, you can find scripts that
2414 demonstrate how you could try to detect these characters yourself.
2415
2416 Microsoft Excel
2417 The import/export from Microsoft Excel is a risky task, according to
2418 the documentation in "Text::CSV::Separator". Microsoft uses the
2419 system's list separator defined in the regional settings, which happens
2420 to be a semicolon for Dutch, German and Spanish (and probably some
2421 others as well). For the English locale, the default is a comma.
2422 In Windows, however, the user is free to choose a predefined locale,
2423 and then change every individual setting in it, so checking the
2424 locale is no solution.
2425
2426 As of version 1.17, a lone first line with just
2427
2428 sep=;
2429
2430 will be recognized and honored when parsing with "getline".
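
 A minimal sketch (file name and content are hypothetical) of parsing
 such an export with "getline":

     # excel.csv might look like:
     #   sep=;
     #   name;age
     #   Tux;42
     my $csv = Text::CSV_XS->new ({ binary => 1, auto_diag => 1 });
     open my $fh, "<", "excel.csv" or die "excel.csv: $!";
     while (my $row = $csv->getline ($fh)) {
         # fields were split on ";" because of the leading sep= line
         }
     close $fh;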
2431
2433 More Errors & Warnings
2434 New extensions ought to be clear and concise in reporting what
2435 error has occurred where and why, and maybe also offer a remedy to
2436 the problem.
2437
2438 "error_diag" is a (very) good start, but there is more work to be
2439 done in this area.
2440
2441 Basic calls should croak or warn on illegal parameters. Errors
2442 should be documented.
2443
2444 setting meta info
2445 Future extensions might include extending the "meta_info",
2446 "is_quoted", and "is_binary" to accept setting these flags for
2447 fields, so you can specify which fields are quoted in the
2448 "combine"/"string" combination.
2449
2450 $csv->meta_info (0, 1, 1, 3, 0, 0);
2451 $csv->is_quoted (3, 1);
2452
2453 Metadata Vocabulary for Tabular Data
2454 <http://w3c.github.io/csvw/metadata/> (a W3C editor's draft) could be
2455 an example for supporting more metadata.
2456
2457 Parse the whole file at once
2458 Implement new methods or functions that enable parsing of a
2459 complete file at once, returning a list of hashes. A possible extension
2460 to this could be to enable column selection on the call:
2461
2462 my @AoH = $csv->parse_file ($filename, { cols => [ 1, 4..8, 12 ]});
2463
2464 Returning something like
2465
2466 [ { fields => [ 1, 2, "foo", 4.5, undef, "", 8 ],
2467 flags => [ ... ],
2468 },
2469 { fields => [ ... ],
2470 .
2471 },
2472 ]
2473
2474 Note that the "csv" function already supports most of this, but does
2475 not return flags. "getline_all" returns all rows for an open stream,
2476 but this will not return flags either. "fragment" can reduce the
2477 required rows or columns, but cannot combine them.
2478
2479 Cookbook
2480 Write a document that has recipes for most known non-standard (and
2481 maybe some standard) "CSV" formats, including formats that use
2482 "TAB", ";", "|", or other non-comma separators.
2483
2484 Examples could be taken from W3C's CSV on the Web: Use Cases and
2485 Requirements <http://w3c.github.io/csvw/use-cases-and-
2486 requirements/index.html>
2487
2488 Steal
2489 Steal good new ideas and features from PapaParse
2490 <http://papaparse.com> or csvkit <http://csvkit.readthedocs.org>.
2491
2492 Perl6 support
2493 I'm already working on perl6 support here
2494 <https://github.com/Tux/CSV>. No promises yet on when it is finished
2495 (or fast). Trying to keep the API as similar as possible.
2496
2497 NOT TODO
2498 combined methods
2499 Requests for adding means (methods) that combine "combine" and
2500 "string" in a single call will not be honored (use "print" instead).
2501 Likewise for "parse" and "fields" (use "getline" instead), given the
2502 problems with embedded newlines.
2503
2504 Release plan
2505 No guarantees, but this is what I had in mind some time ago:
2506
2507 · DIAGNOSTICS section in pod to *describe* the errors (see below)
2508
2510 The current hard-coding of characters and character ranges makes this
2511 code unusable on "EBCDIC" systems. Recent work in perl-5.20 might
2512 change that.
2513
2514 Opening "EBCDIC" encoded files on "ASCII"+ systems is likely to
2515 succeed using Encode's "cp37", "cp1047", or "posix-bc":
2516
2517 open my $fh, "<:encoding(cp1047)", "ebcdic_file.csv" or die "...";
2518
2520 Still under construction ...
2521
2522 If an error occurs, "$csv->error_diag" can be used to get information
2523 on the cause of the failure. Note that for speed reasons the internal
2524 value is never cleared on success, so using the value returned by
2525 "error_diag" in normal cases - when no error occurred - may cause
2526 unexpected results.
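
 In list context, "error_diag" returns the separate parts of the last
 error, which allows a more detailed report (a sketch; the file name is
 hypothetical):

     my $csv = Text::CSV_XS->new ({ binary => 1 });
     open my $fh, "<", "suspect.csv" or die "suspect.csv: $!";
     while (my $row = $csv->getline ($fh)) {
         # process @$row
         }
     close $fh;
     my ($err, $msg, $pos, $recno, $fldno) = $csv->error_diag;
     $err == 2012 or   # 2012 is just end-of-data
         warn "Error $err ($msg) in record $recno at position $pos\n";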
2527
2528 If the constructor failed, the cause can be found using "error_diag" as
2529 a class method, like "Text::CSV_XS->error_diag".
2530
2531 The "$csv->error_diag" method is automatically invoked upon error when
2532 the constructor was called with "auto_diag" set to 1 or 2, or when
2533 autodie is in effect. When set to 1, this will cause a "warn" with the
2534 error message, when set to 2, it will "die". "2012 - EOF" is excluded
2535 from "auto_diag" reports.
2536
2537 Errors can be (individually) caught using the "error" callback.
2538
2539 The errors as described below are available. I have tried to make the
2540 error itself explanatory enough, but more descriptions will be added.
2541 For most of these errors, the first three capitals describe the error
2542 category:
2543
2544 · INI
2545
2546 Initialization error or option conflict.
2547
2548 · ECR
2549
2550 Carriage-Return related parse error.
2551
2552 · EOF
2553
2554 End-Of-File related parse error.
2555
2556 · EIQ
2557
2558 Parse error inside quotation.
2559
2560 · EIF
2561
2562 Parse error inside field.
2563
2564 · ECB
2565
2566 Combine error.
2567
2568 · EHR
2569
2570 HashRef parse related error.
2571
2572 And below should be the complete list of error codes that can be
2573 returned:
2574
2575 · 1001 "INI - sep_char is equal to quote_char or escape_char"
2576
2577 The separation character cannot be equal to the quotation
2578 character or to the escape character, as this would invalidate all
2579 parsing rules.
2580
2581 · 1002 "INI - allow_whitespace with escape_char or quote_char SP or
2582 TAB"
2583
2584 Using the "allow_whitespace" attribute when either "quote_char" or
2585 "escape_char" is equal to "SPACE" or "TAB" is too ambiguous to
2586 allow.
2587
2588 · 1003 "INI - \r or \n in main attr not allowed"
2589
2590 Using default "eol" characters in either "sep_char", "quote_char",
2591 or "escape_char" is not allowed.
2592
2593 · 1004 "INI - callbacks should be undef or a hashref"
2594
2595 The "callbacks" attribute only allows one to be "undef" or a hash
2596 reference.
2597
2598 · 1005 "INI - EOL too long"
2599
2600 The value passed for EOL is exceeding its maximum length (16).
2601
2602 · 1006 "INI - SEP too long"
2603
2604 The value passed for SEP is exceeding its maximum length (16).
2605
2606 · 1007 "INI - QUOTE too long"
2607
2608 The value passed for QUOTE is exceeding its maximum length (16).
2609
2610 · 1008 "INI - SEP undefined"
2611
2612 The value passed for SEP should be defined and not empty.
2613
2614 · 1010 "INI - the header is empty"
2615
2616 The header line parsed in the "header" is empty.
2617
2618 · 1011 "INI - the header contains more than one valid separator"
2619
2620 The header line parsed in the "header" contains more than one
2621 (unique) separator character out of the allowed set of separators.
2622
2623 · 1012 "INI - the header contains an empty field"
2624
2625 The header line parsed in the "header" contains an empty field.
2626
2627 · 1013 "INI - the header contains non-unique fields"
2628
2629 The header line parsed in the "header" contains at least two
2630 identical fields.
2631
2632 · 1014 "INI - header called on undefined stream"
2633
2634 The header line cannot be parsed from an undefined source.
2635
2636 · 1500 "PRM - Invalid/unsupported argument(s)"
2637
2638 Function or method called with invalid argument(s) or parameter(s).
2639
2640 · 1501 "PRM - The key attribute is passed as an unsupported type"
2641
2642 The "key" attribute is of an unsupported type.
2643
2644 · 2010 "ECR - QUO char inside quotes followed by CR not part of EOL"
2645
2646 When "eol" has been set to anything but the default, like
2647 "\r\t\n", and the "\r" is following the second (closing)
2648 "quote_char", where the characters following the "\r" do not make up
2649 the "eol" sequence, this is an error.
2650
2651 · 2011 "ECR - Characters after end of quoted field"
2652
2653 Sequences like "1,foo,"bar"baz,22,1" are not allowed. "bar" is a
2654 quoted field and after the closing double-quote, there should be
2655 either a new-line sequence or a separation character.
2656
2657 · 2012 "EOF - End of data in parsing input stream"
2658
2659 Self-explaining. End-of-file while inside parsing a stream. Can
2660 happen only when reading from streams with "getline", as using
2661 "parse" is done on strings that are not required to have a trailing
2662 "eol".
2663
2664 · 2013 "INI - Specification error for fragments RFC7111"
2665
2666 Invalid specification for URI "fragment" specification.
2667
2668 · 2014 "ENF - Inconsistent number of fields"
2669
2670 Inconsistent number of fields under strict parsing.
2671
2672 · 2021 "EIQ - NL char inside quotes, binary off"
2673
2674 Sequences like "1,"foo\nbar",22,1" are allowed only when the binary
2675 option has been selected with the constructor.
2676
2677 · 2022 "EIQ - CR char inside quotes, binary off"
2678
2679 Sequences like "1,"foo\rbar",22,1" are allowed only when the binary
2680 option has been selected with the constructor.
2681
2682 · 2023 "EIQ - QUO character not allowed"
2683
2684 Sequences like ""foo "bar" baz",qu" and "2023,",2008-04-05,"Foo,
2685 Bar",\n" will cause this error.
2686
2687 · 2024 "EIQ - EOF cannot be escaped, not even inside quotes"
2688
2689 The escape character is not allowed as last character in an input
2690 stream.
2691
2692 · 2025 "EIQ - Loose unescaped escape"
2693
2694 An escape character should escape only characters that need escaping.
2695
2696 Allowing the escape for other characters is possible with the
2697 attribute "allow_loose_escape".
2698
2699 · 2026 "EIQ - Binary character inside quoted field, binary off"
2700
2701 Binary characters are not allowed by default. Exceptions are
2702 fields that contain valid UTF-8, which will automatically be upgraded
2703 if the content is valid UTF-8. Set "binary" to 1 to accept binary
2704 data.
2705
2706 · 2027 "EIQ - Quoted field not terminated"
2707
2708 When parsing a field that started with a quotation character, the
2709 field is expected to be closed with a quotation character. When the
2710 parsed line is exhausted before the quote is found, that field is not
2711 terminated.
2712
2713 · 2030 "EIF - NL char inside unquoted verbatim, binary off"
2714
2715 · 2031 "EIF - CR char is first char of field, not part of EOL"
2716
2717 · 2032 "EIF - CR char inside unquoted, not part of EOL"
2718
2719 · 2034 "EIF - Loose unescaped quote"
2720
2721 · 2035 "EIF - Escaped EOF in unquoted field"
2722
2723 · 2036 "EIF - ESC error"
2724
2725 · 2037 "EIF - Binary character in unquoted field, binary off"
2726
2727 · 2110 "ECB - Binary character in Combine, binary off"
2728
2729 · 2200 "EIO - print to IO failed. See errno"
2730
2731 · 3001 "EHR - Unsupported syntax for column_names ()"
2732
2733 · 3002 "EHR - getline_hr () called before column_names ()"
2734
2735 · 3003 "EHR - bind_columns () and column_names () fields count
2736 mismatch"
2737
2738 · 3004 "EHR - bind_columns () only accepts refs to scalars"
2739
2740 · 3006 "EHR - bind_columns () did not pass enough refs for parsed
2741 fields"
2742
2743 · 3007 "EHR - bind_columns needs refs to writable scalars"
2744
2745 · 3008 "EHR - unexpected error in bound fields"
2746
2747 · 3009 "EHR - print_hr () called before column_names ()"
2748
2749 · 3010 "EHR - print_hr () called with invalid arguments"
2750
2752 IO::File, IO::Handle, IO::Wrap, Text::CSV, Text::CSV_PP,
2753 Text::CSV::Encoded, Text::CSV::Separator, Text::CSV::Slurp,
2754 Spreadsheet::CSV and Spreadsheet::Read, and of course perl.
2755
2756 If you are using perl6, you can have a look at "Text::CSV" in the
2757 perl6 ecosystem, offering the same features.
2758
2759 non-perl
2760
2761 A CSV parser in JavaScript, also used by W3C <http://www.w3.org>, is
2762 the multi-threaded in-browser PapaParse <http://papaparse.com/>.
2763
2764 csvkit <http://csvkit.readthedocs.org> is a python CSV parsing toolkit.
2765
2767 Alan Citterman <alan@mfgrtl.com> wrote the original Perl module.
2768 Please don't send mail concerning Text::CSV_XS to Alan, who is not
2769 involved in the C/XS part that is now the main part of the module.
2770
2771 Jochen Wiedmann <joe@ispsoft.de> rewrote the en- and decoding in C by
2772 implementing a simple finite-state machine. He added variable quote,
2773 escape and separator characters, the binary mode and the print and
2774 getline methods. See ChangeLog releases 0.10 through 0.23.
2775
2776 H.Merijn Brand <h.m.brand@xs4all.nl> cleaned up the code, added the
2777 field flags methods, wrote the major part of the test suite, completed
2778 the documentation, fixed most RT bugs, added all the allow flags and
2779 the "csv" function. See ChangeLog releases 0.25 and on.
2780
2782 Copyright (C) 2007-2018 H.Merijn Brand. All rights reserved.
2783 Copyright (C) 1998-2001 Jochen Wiedmann. All rights reserved.
2784 Copyright (C) 1997 Alan Citterman. All rights reserved.
2785
2786 This library is free software; you can redistribute and/or modify it
2787 under the same terms as Perl itself.
2788
2789
2790
2791perl v5.28.0 2018-09-13 CSV_XS(3)