1CSV_XS(3)             User Contributed Perl Documentation            CSV_XS(3)
2
3
4

NAME

6       Text::CSV_XS - comma-separated values manipulation routines
7

SYNOPSIS

9        # Functional interface
10        use Text::CSV_XS qw( csv );
11
12        # Read whole file in memory
13        my $aoa = csv (in => "data.csv");    # as array of array
14        my $aoh = csv (in => "data.csv",
15                       headers => "auto");   # as array of hash
16
17        # Write array of arrays as csv file
18        csv (in => $aoa, out => "file.csv", sep_char=> ";");
19
20        # Only show lines where "code" is odd
21        csv (in => "data.csv", filter => { code => sub { $_ % 2 }});
22
23
24        # Object interface
25        use Text::CSV_XS;
26
27        my @rows;
28        # Read/parse CSV
29        my $csv = Text::CSV_XS->new ({ binary => 1, auto_diag => 1 });
30        open my $fh, "<:encoding(utf8)", "test.csv" or die "test.csv: $!";
31        while (my $row = $csv->getline ($fh)) {
32            $row->[2] =~ m/pattern/ or next; # 3rd field should match
33            push @rows, $row;
34            }
35        close $fh;
36
37        # and write as CSV
38        open $fh, ">:encoding(utf8)", "new.csv" or die "new.csv: $!";
39        $csv->say ($fh, $_) for @rows;
40        close $fh or die "new.csv: $!";
41

DESCRIPTION

43       Text::CSV_XS  provides facilities for the composition  and
44       decomposition of comma-separated values.  An instance of the
45       Text::CSV_XS class will combine fields into a "CSV" string and parse a
46       "CSV" string into fields.
47
48       The module accepts either strings or files as input  and supports the
49       use of user-specified characters for delimiters, separators, and
50       escapes.
51
52   Embedded newlines
53       Important Note:  The default behavior is to accept only ASCII
54       characters in the range from 0x20 (space) to 0x7E (tilde).   This means
55       that the fields can not contain newlines. If your data contains
56       newlines embedded in fields, or characters above 0x7E (tilde), or
57       binary data, you must set "binary => 1" in the call to "new". To cover
58       the widest range of parsing options, you will always want to set
59       binary.
60
61       But you still have the problem  that you have to pass a correct line to
62       the "parse" method, which is more complicated from the usual point of
63       usage:
64
65        my $csv = Text::CSV_XS->new ({ binary => 1, eol => $/ });
66        while (<>) {           #  WRONG!
67            $csv->parse ($_);
68            my @fields = $csv->fields ();
69            }
70
71       this will break, as the "while" might read broken lines:  it does not
72       care about the quoting. If you need to support embedded newlines,  the
73       way to go is to  not  pass "eol" in the parser  (it accepts "\n", "\r",
74       and "\r\n" by default) and then
75
76        my $csv = Text::CSV_XS->new ({ binary => 1 });
77        open my $fh, "<", $file or die "$file: $!";
78        while (my $row = $csv->getline ($fh)) {
79            my @fields = @$row;
80            }
81
82       The old(er) way of using global file handles is still supported
83
84        while (my $row = $csv->getline (*ARGV)) { ... }
85
86   Unicode
87       Unicode is only tested to work with perl-5.8.2 and up.
88
89       See also "BOM".
90
91       The simplest way to ensure the correct encoding is used for  in- and
92       output is by either setting layers on the filehandles, or setting the
93       "encoding" argument for "csv".
94
95        open my $fh, "<:encoding(UTF-8)", "in.csv"  or die "in.csv: $!";
96       or
97        my $aoa = csv (in => "in.csv",     encoding => "UTF-8");
98
99        open my $fh, ">:encoding(UTF-8)", "out.csv" or die "out.csv: $!";
100       or
101        csv (in => $aoa, out => "out.csv", encoding => "UTF-8");
102
103       On parsing (both for  "getline" and  "parse"),  if the source is marked
104       as being UTF8, then all fields that are marked binary will also be marked
105       UTF8.
106
107       On combining ("print"  and  "combine"):  if any of the combining fields
108       was marked UTF8, the resulting string will be marked as UTF8.  Note
109       however that any fields  before  the first field marked UTF8  that
110       contain 8-bit characters not upgraded to UTF8  will still be  "bytes"  in
111       the resulting string, possibly causing unexpected errors.  If you pass
112       data of different encodings,  or you don't know whether the encodings
113       differ, force the data to be upgraded before you pass
114       them on:
115
116        $csv->print ($fh, [ map { utf8::upgrade (my $x = $_); $x } @data ]);
117
118       For complete control over encoding, please use Text::CSV::Encoded:
119
120        use Text::CSV::Encoded;
121        my $csv = Text::CSV::Encoded->new ({
122            encoding_in  => "iso-8859-1", # the encoding comes into   Perl
123            encoding_out => "cp1252",     # the encoding comes out of Perl
124            });
125
126        $csv = Text::CSV::Encoded->new ({ encoding  => "utf8" });
127        # combine () and print () accept *literally* utf8 encoded data
128        # parse () and getline () return *literally* utf8 encoded data
129
130        $csv = Text::CSV::Encoded->new ({ encoding  => undef }); # default
131        # combine () and print () accept UTF8 marked data
132        # parse () and getline () return UTF8 marked data
133
134   BOM
135       BOM  (or Byte Order Mark)  handling is available only inside the
136       "header" method.   This method supports the following encodings:
137       "utf-8", "utf-1", "utf-32be", "utf-32le", "utf-16be", "utf-16le",
138       "utf-ebcdic", "scsu", "bocu-1", and "gb-18030". See Wikipedia
139       <https://en.wikipedia.org/wiki/Byte_order_mark>.
140
141       If a file has a BOM, the easiest way to deal with that is
142
143        my $aoh = csv (in => $file, detect_bom => 1);
144
145       All records will be encoded based on the detected BOM.
146
147       This implies a call to the  "header"  method,  which defaults to also
148       set the "column_names". So this is not the same as
149
150        my $aoh = csv (in => $file, headers => "auto");
151
152       which only reads the first record to set  "column_names"  but ignores
153       the meaning of a possibly present BOM.
154
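       The object interface offers the same BOM handling through the "header"
       method (described under METHODS below).  A minimal sketch, with the
       file name made up:

        my $csv = Text::CSV_XS->new ({ binary => 1, auto_diag => 1 });
        open my $fh, "<", "file.csv" or die "file.csv: $!";
        $csv->header ($fh, { detect_bom => 1 }); # sets encoding and column_names
        while (my $row = $csv->getline_hr ($fh)) {
            # $row is a hashref keyed on the (lower-cased) header fields
            }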

SPECIFICATION

156       While no formal specification for CSV exists, RFC 4180
157       <https://datatracker.ietf.org/doc/html/rfc4180> (1) describes the
158       common format and establishes  "text/csv" as the MIME type registered
159       with the IANA. RFC 7111 <https://datatracker.ietf.org/doc/html/rfc7111>
160       (2) adds fragments to CSV.
161
162       Many informal documents exist that describe the "CSV" format.   "How
163       To: The Comma Separated Value (CSV) File Format"
164       <http://creativyst.com/Doc/Articles/CSV/CSV01.shtml> (3)  provides an
165       overview of the  "CSV"  format in the most widely used applications and
166       explains how it can best be used and supported.
167
168        1) https://datatracker.ietf.org/doc/html/rfc4180
169        2) https://datatracker.ietf.org/doc/html/rfc7111
170        3) http://creativyst.com/Doc/Articles/CSV/CSV01.shtml
171
172       The basic rules are as follows:
173
174       CSV  is a delimited data format that has fields/columns separated by
175       the comma character and records/rows separated by newlines. Fields that
176       contain a special character (comma, newline, or double quote),  must be
177       enclosed in double quotes. However, if a line contains a single entry
178       that is the empty string, it may be enclosed in double quotes.  If a
179       field's value contains a double quote character it is escaped by
180       placing another double quote character next to it. The "CSV" file
181       format does not require a specific character encoding, byte order, or
182       line terminator format.
183
184       • Each record is a single line ended by a line feed  (ASCII/"LF"=0x0A)
185         or a carriage return and line feed pair (ASCII/"CRLF"="0x0D 0x0A"),
186         however, line-breaks may be embedded.
187
188       • Fields are separated by commas.
189
190       • Allowable characters within a "CSV" field include 0x09 ("TAB") and
191         the inclusive range of 0x20 (space) through 0x7E (tilde).  In binary
192         mode all characters are accepted, at least in quoted fields.
193
194       • A field within  "CSV"  must be surrounded by  double-quotes to
195         contain  a separator character (comma).
196
197       Though this is the clearest and most restrictive definition,  Text::CSV_XS
198       is way more liberal than this, and allows extension:
199
200       • Line termination by a single carriage return is accepted by default
201
202       • The separation-, quotation-, and escape- characters can be any ASCII
203         character in the range from  0x20 (space) to  0x7E (tilde).
204         Characters outside this range may or may not work as expected.
205         Multibyte characters, like UTF "U+060C" (ARABIC COMMA),   "U+FF0C"
206         (FULLWIDTH COMMA),  "U+241B" (SYMBOL FOR ESCAPE), "U+2424" (SYMBOL
207         FOR NEWLINE), "U+FF02" (FULLWIDTH QUOTATION MARK), and "U+201C" (LEFT
208         DOUBLE QUOTATION MARK) (to give some examples of what might look
209         promising) work for newer versions of perl for "sep_char", and
210         "quote_char" but not for "escape_char".
211
212         If you use perl-5.8.2 or higher these three attributes are
213         utf8-decoded, to increase the likelihood of success. This way
214         "U+00FE" will be allowed as a quote character.
215
216       • A field in  "CSV"  must be surrounded by double-quotes to make an
217         embedded double-quote, represented by a pair of consecutive double-
218         quotes, valid. In binary mode you may additionally use the sequence
219         ""0" for representation of a NULL byte. Using 0x00 in binary mode is
220         just as valid.
221
222       • Several violations of the above specification may be lifted by
223         passing some options as attributes to the object constructor.
224
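       To see the quoting rules above in action, here is a minimal sketch
       using the module itself (field values made up):

        use Text::CSV_XS;
        my $csv = Text::CSV_XS->new ();
        $csv->combine (1, 'he said "hi"', "a,b") or die "combine () failed";
        print $csv->string, "\n";
        # 1,"he said ""hi""","a,b"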

METHODS

226   version
227       (Class method) Returns the current module version.
228
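       For example:

        my $version = Text::CSV_XS->version ();
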
229   new
230       (Class method) Returns a new instance of class Text::CSV_XS. The
231       attributes are described by the (optional) hash ref "\%attr".
232
233        my $csv = Text::CSV_XS->new ({ attributes ... });
234
235       The following attributes are available:
236
237       eol
238
239        my $csv = Text::CSV_XS->new ({ eol => $/ });
240                  $csv->eol (undef);
241        my $eol = $csv->eol;
242
243       The end-of-line string to add to rows for "print" or the record
244       separator for "getline".
245
246       When not passed in a parser instance,  the default behavior is to
247       accept "\n", "\r", and "\r\n", so it is probably safer to not specify
248       "eol" at all. Passing "undef" or the empty string behave the same.
249
250       When not passed in a generating instance,  records are not terminated
251       at all, so it is probably wise to pass something you expect. A safe
252       choice for "eol" on output is either $/ or "\r\n".
253
254       Common values for "eol" are "\012" ("\n" or Line Feed),  "\015\012"
255       ("\r\n" or Carriage Return, Line Feed),  and "\015"  ("\r" or Carriage
256       Return). The "eol" attribute cannot exceed 7 (ASCII) characters.
257
258       If both $/ and "eol" equal "\015", parsing lines that end on only a
259       Carriage Return without Line Feed will be "parse"d correctly.
260
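       A minimal sketch for output, following the advice above (the file name
       is made up):

        my $csv = Text::CSV_XS->new ({ binary => 1, eol => "\r\n" });
        open my $fh, ">", "out.csv" or die "out.csv: $!";
        $csv->print ($fh, [ "a", "b", "c" ]); # record is terminated with CRLF
        close $fh or die "out.csv: $!";
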
261       sep_char
262
263        my $csv = Text::CSV_XS->new ({ sep_char => ";" });
264                $csv->sep_char (";");
265        my $c = $csv->sep_char;
266
267       The char used to separate fields, by default a comma (",").  Limited
268       to a single-byte character, usually in the range from 0x20 (space) to
269       0x7E (tilde). When longer sequences are required, use "sep".
270
271       The separation character can not be equal to the quote character  or to
272       the escape character.
273
274       See also "CAVEATS"
275
276       sep
277
278        my $csv = Text::CSV_XS->new ({ sep => "\N{FULLWIDTH COMMA}" });
279                  $csv->sep (";");
280        my $sep = $csv->sep;
281
282       The chars used to separate fields, by default undefined. Limited to 8
283       bytes.
284
285       When set, overrules "sep_char".  If its length is one byte it acts as
286       an alias to "sep_char".
287
288       See also "CAVEATS"
289
290       quote_char
291
292        my $csv = Text::CSV_XS->new ({ quote_char => "'" });
293                $csv->quote_char (undef);
294        my $c = $csv->quote_char;
295
296       The character to quote fields containing blanks or binary data,  by
297       default the double quote character (""").  A value of undef suppresses
298       quote chars (for simple cases only). Limited to a single-byte
299       character, usually in the range from  0x20 (space) to  0x7E (tilde).
300       When longer sequences are required, use "quote".
301
302       "quote_char" can not be equal to "sep_char".
303
304       quote
305
306        my $csv = Text::CSV_XS->new ({ quote => "\N{FULLWIDTH QUOTATION MARK}" });
307                    $csv->quote ("'");
308        my $quote = $csv->quote;
309
310       The chars used to quote fields, by default undefined. Limited to 8
311       bytes.
312
313       When set, overrules "quote_char". If its length is one byte it acts as
314       an alias to "quote_char".
315
316       This method does not support "undef".  Use "quote_char" to disable
317       quotation.
318
319       See also "CAVEATS"
320
321       escape_char
322
323        my $csv = Text::CSV_XS->new ({ escape_char => "\\" });
324                $csv->escape_char (":");
325        my $c = $csv->escape_char;
326
327       The character to  escape  certain characters inside quoted fields.
328       This is limited to a  single-byte  character,  usually  in the  range
329       from  0x20 (space) to 0x7E (tilde).
330
331       The "escape_char" defaults to being the double-quote mark ("""). In
332       other words the same as the default "quote_char". This means that
333       doubling the quote mark in a field escapes it:
334
335        "foo","bar","Escape ""quote mark"" with two ""quote marks""","baz"
336
337       If  you  change  the   "quote_char"  without  changing  the
338       "escape_char",  the  "escape_char" will still be the double-quote
339       (""").  If instead you want to escape the  "quote_char" by doubling it
340       you will need to also change the  "escape_char"  to be the same as what
341       you have changed the "quote_char" to.
342
343       Setting "escape_char" to <undef> or "" will disable escaping completely
344       and is greatly discouraged. This will also disable "escape_null".
345
346       The escape character can not be equal to the separation character.
347
348       binary
349
350        my $csv = Text::CSV_XS->new ({ binary => 1 });
351                $csv->binary (0);
352        my $f = $csv->binary;
353
354       If this attribute is 1,  you may use binary characters in quoted
355       fields, including line feeds, carriage returns and "NULL" bytes. (The
356       latter could be escaped as ""0".) By default this feature is off.
357
358       If a string is marked UTF8,  "binary" will be turned on automatically
359       when binary characters other than "CR" and "NL" are encountered.   Note
360       that a simple string like "\x{00a0}" might still be binary, but not
361       marked UTF8, so setting "{ binary => 1 }" is still a wise option.
362
363       strict
364
365        my $csv = Text::CSV_XS->new ({ strict => 1 });
366                $csv->strict (0);
367        my $f = $csv->strict;
368
369       If this attribute is set to 1, any row that parses to a different
370       number of fields than the previous row will cause the parser to throw
371       error 2014.
372
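       A minimal sketch (the two records are made up):

        my $csv = Text::CSV_XS->new ({ strict => 1, auto_diag => 1 });
        $csv->parse ("a,b,c"); # 3 fields, OK
        $csv->parse ("a,b");   # 2 fields, error 2014
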
373       skip_empty_rows
374
375        my $csv = Text::CSV_XS->new ({ skip_empty_rows => 1 });
376                $csv->skip_empty_rows (0);
377        my $f = $csv->skip_empty_rows;
378
379       If this attribute is set to 1,  any row that has an  "eol" immediately
380       following the start of line will be skipped.  Default behavior is to
381       return one single empty field.
382
383       This attribute is only used in parsing.
384
385       formula_handling
386
387       formula
388
389        my $csv = Text::CSV_XS->new ({ formula => "none" });
390                $csv->formula ("none");
391        my $f = $csv->formula;
392
393       This defines the behavior of fields containing formulas. As formulas
394       are considered dangerous in spreadsheets, this attribute can define an
395       optional action to be taken if a field starts with an equal sign ("=").
396
397       For the purpose of code readability, this can also be written as
398
399        my $csv = Text::CSV_XS->new ({ formula_handling => "none" });
400                $csv->formula_handling ("none");
401        my $f = $csv->formula_handling;
402
403       Possible values for this attribute are
404
405       none
406         Take no specific action. This is the default.
407
408          $csv->formula ("none");
409
410       die
411         Cause the process to "die" whenever a leading "=" is encountered.
412
413          $csv->formula ("die");
414
415       croak
416         Cause the process to "croak" whenever a leading "=" is encountered.
417         (See Carp)
418
419          $csv->formula ("croak");
420
421       diag
422         Report position and content of the field whenever a leading  "=" is
423         found.  The value of the field is unchanged.
424
425          $csv->formula ("diag");
426
427       empty
428         Replace the content of fields that start with a "=" with the empty
429         string.
430
431          $csv->formula ("empty");
432          $csv->formula ("");
433
434       undef
435         Replace the content of fields that start with a "=" with "undef".
436
437          $csv->formula ("undef");
438          $csv->formula (undef);
439
440       a callback
441         Modify the content of fields that start with a  "="  with the return-
442         value of the callback.  The original content of the field is
443         available inside the callback as $_;
444
445          # Replace all formulas with 42
446          $csv->formula (sub { 42; });
447
448          # same as $csv->formula ("empty") but slower
449          $csv->formula (sub { "" });
450
451          # Allow =4+12
452          $csv->formula (sub { s/^=(\d+\+\d+)$/$1/eer });
453
454          # Allow more complex calculations
455          $csv->formula (sub { eval { s{^=([-+*/0-9()]+)$}{$1}ee }; $_ });
456
457       All other values will give a warning and then fall back to "diag".
458
459       decode_utf8
460
461        my $csv = Text::CSV_XS->new ({ decode_utf8 => 1 });
462                $csv->decode_utf8 (0);
463        my $f = $csv->decode_utf8;
464
465       This attribute defaults to TRUE.
466
467       While parsing,  fields that are valid UTF-8 are automatically set to
468       be UTF-8, so that
469
470         $csv->parse ("\xC4\xA8\n");
471
472       results in
473
474         PV("\304\250"\0) [UTF8 "\x{128}"]
475
476       Sometimes this might not be the desired action.  To prevent those upgrades,
477       set this attribute to false, and the result will be
478
479         PV("\304\250"\0)
480
481       auto_diag
482
483        my $csv = Text::CSV_XS->new ({ auto_diag => 1 });
484                $csv->auto_diag (2);
485        my $l = $csv->auto_diag;
486
487       Setting this attribute to a number between 1 and 9 causes  "error_diag" to
488       be automatically called in void context upon errors.
489
490       In case of error "2012 - EOF", this call will be void.
491
492       If "auto_diag" is set to a numeric value greater than 1, it will "die"
493       on errors instead of "warn".  If set to anything unrecognized,  it will
494       be silently ignored.
495
496       Future extensions to this feature will include more reliable auto-
497       detection of  "autodie"  being active in the scope in which the error
498       occurred, which will increment the value of "auto_diag" by 1 the
499       moment the error is detected.
500
501       diag_verbose
502
503        my $csv = Text::CSV_XS->new ({ diag_verbose => 1 });
504                $csv->diag_verbose (2);
505        my $l = $csv->diag_verbose;
506
507       Set the verbosity of the output triggered by "auto_diag".   Currently
508       only adds the current  input-record-number  (if known)  to the
509       diagnostic output with an indication of the position of the error.
510
511       blank_is_undef
512
513        my $csv = Text::CSV_XS->new ({ blank_is_undef => 1 });
514                $csv->blank_is_undef (0);
515        my $f = $csv->blank_is_undef;
516
517       Under normal circumstances, "CSV" data makes no distinction between
518       quoted- and unquoted empty fields.  These both end up in an empty
519       string field once read, thus
520
521        1,"",," ",2
522
523       is read as
524
525        ("1", "", "", " ", "2")
526
527       When writing  "CSV" files with either  "always_quote" or  "quote_empty"
528       set, the unquoted  empty field is the result of an undefined value.
529       To enable this distinction when  reading "CSV"  data,  the
530       "blank_is_undef"  attribute will cause  unquoted empty fields to be set
531       to "undef", causing the above to be parsed as
532
533        ("1", "", undef, " ", "2")
534
535       Note that this is specifically important when loading  "CSV" fields
536       into a database that allows "NULL" values,  as the perl equivalent for
537       "NULL" is "undef" in DBI land.
538
539       empty_is_undef
540
541        my $csv = Text::CSV_XS->new ({ empty_is_undef => 1 });
542                $csv->empty_is_undef (0);
543        my $f = $csv->empty_is_undef;
544
545       Going one  step  further  than  "blank_is_undef",  this attribute
546       converts all empty fields to "undef", so
547
548        1,"",," ",2
549
550       is read as
551
552        (1, undef, undef, " ", 2)
553
554       Note that this affects only fields that are  originally  empty,  not
555       fields that are empty after stripping allowed whitespace. YMMV.
556
557       allow_whitespace
558
559        my $csv = Text::CSV_XS->new ({ allow_whitespace => 1 });
560                $csv->allow_whitespace (0);
561        my $f = $csv->allow_whitespace;
562
563       When this option is set to true,  the whitespace  ("TAB"'s and
564       "SPACE"'s) surrounding  the  separation character  is removed when
565       parsing.  If either "TAB" or "SPACE" is one of the three characters
566       "sep_char", "quote_char", or "escape_char" it will not be considered
567       whitespace.
568
569       Now lines like:
570
571        1 , "foo" , bar , 3 , zapp
572
573       are parsed as valid "CSV", even though they violate the "CSV" specs.
574
575       Note that  all  whitespace is stripped from both  start and  end of
576       each field.  That makes this  more than just a feature for parsing
577       bad "CSV" lines, as
578
579        1,   2.0,  3,   ape  , monkey
580
581       will now be parsed as
582
583        ("1", "2.0", "3", "ape", "monkey")
584
585       even if the original line was perfectly acceptable "CSV".
586
587       allow_loose_quotes
588
589        my $csv = Text::CSV_XS->new ({ allow_loose_quotes => 1 });
590                $csv->allow_loose_quotes (0);
591        my $f = $csv->allow_loose_quotes;
592
593       By default, parsing unquoted fields containing "quote_char" characters
594       like
595
596        1,foo "bar" baz,42
597
598       would result in parse error 2034.  Though it is still bad practice to
599       allow this format,  we  cannot  help  the  fact  that  some  vendors
600       make  their applications spit out lines styled this way.
601
602       If there is really bad "CSV" data, like
603
604        1,"foo "bar" baz",42
605
606       or
607
608        1,""foo bar baz"",42
609
610       there is a way to get this data-line parsed and leave the quotes inside
611       the quoted field as-is.  This can be achieved by setting
612       "allow_loose_quotes" AND making sure that the "escape_char" is  not
613       equal to "quote_char".
614
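       A minimal sketch of that combination; picking "\\" as escape character
       is just one way to make it differ from the quote character:

        my $csv = Text::CSV_XS->new ({
            binary             => 1,
            allow_loose_quotes => 1,
            escape_char        => "\\",
            });
        $csv->parse (q{1,"foo "bar" baz",42}) or die "" . $csv->error_diag;
        my @fields = $csv->fields; # ('1', 'foo "bar" baz', '42')
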
615       allow_loose_escapes
616
617        my $csv = Text::CSV_XS->new ({ allow_loose_escapes => 1 });
618                $csv->allow_loose_escapes (0);
619        my $f = $csv->allow_loose_escapes;
620
621       Parsing fields  that  have  "escape_char"  characters that escape
622       characters that do not need to be escaped, like:
623
624        my $csv = Text::CSV_XS->new ({ escape_char => "\\" });
625        $csv->parse (qq{1,"my bar\'s",baz,42});
626
627       would result in parse error 2025.   Though it is bad practice to allow
628       this format,  this attribute enables you to treat all escape character
629       sequences equally.
630
631       allow_unquoted_escape
632
633        my $csv = Text::CSV_XS->new ({ allow_unquoted_escape => 1 });
634                $csv->allow_unquoted_escape (0);
635        my $f = $csv->allow_unquoted_escape;
636
637       A backward compatibility issue where "escape_char" differs from
638       "quote_char"  prevents  "escape_char" to be in the first position of a
639       field.  If "quote_char" is equal to the default """ and "escape_char"
640       is set to "\", this would be illegal:
641
642        1,\0,2
643
644       Setting this attribute to 1  might help to overcome issues with
645       backward compatibility and allow this style.
646
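       A minimal sketch for the example above ("binary" is enabled here
       because the escaped "0" stands for a "NULL" byte):

        my $csv = Text::CSV_XS->new ({
            binary                => 1,
            escape_char           => "\\",
            allow_unquoted_escape => 1,
            });
        $csv->parse (q{1,\0,2}); # accepted; fails without the attribute
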
647       always_quote
648
649        my $csv = Text::CSV_XS->new ({ always_quote => 1 });
650                $csv->always_quote (0);
651        my $f = $csv->always_quote;
652
653       By default the generated fields are quoted only if they need to be.
654       For example, if they contain the separator character. If you set this
655       attribute to 1 then all defined fields will be quoted. ("undef" fields
656       are not quoted, see "blank_is_undef"). This makes it quite often easier
657       to handle exported data in external applications.   (Poor creatures who
658       would be better off using Text::CSV_XS. :)
659
660       quote_space
661
662        my $csv = Text::CSV_XS->new ({ quote_space => 1 });
663                $csv->quote_space (0);
664        my $f = $csv->quote_space;
665
666       By default,  a space in a field would trigger quotation.  As no rule
667       exists that requires this in "CSV",  nor one that forbids it, the
668       default is true for safety.   You can exclude the space  from this
669       trigger  by setting this attribute to 0.
670
671       quote_empty
672
673        my $csv = Text::CSV_XS->new ({ quote_empty => 1 });
674                $csv->quote_empty (0);
675        my $f = $csv->quote_empty;
676
677       By default the generated fields are quoted only if they need to be.
678       An empty (defined) field does not need quotation. If you set this
679       attribute to 1 then empty defined fields will be quoted.  ("undef"
680       fields are not quoted, see "blank_is_undef"). See also "always_quote".
681
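       For example (a minimal sketch, values made up):

        $csv->quote_empty (1);
        $csv->combine (1, "", undef, "x");
        print $csv->string, "\n"; # 1,"",,x
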
682       quote_binary
683
684        my $csv = Text::CSV_XS->new ({ quote_binary => 1 });
685                $csv->quote_binary (0);
686        my $f = $csv->quote_binary;
687
688       By default,  all "unsafe" bytes inside a string cause the combined
689       field to be quoted.  By setting this attribute to 0, you can disable
690       that trigger for bytes >= 0x7F.
691
692       escape_null
693
694        my $csv = Text::CSV_XS->new ({ escape_null => 1 });
695                $csv->escape_null (0);
696        my $f = $csv->escape_null;
697
698       By default, a "NULL" byte in a field would be escaped. This option
699       enables you to treat the  "NULL"  byte as a simple binary character in
700       binary mode (when "{ binary => 1 }" is set).  The default is true.  You
701       can prevent "NULL" escapes by setting this attribute to 0.
702
703       When the "escape_char" attribute is set to undefined,  this attribute
704       will be set to false.
705
706       The default setting will encode "=\x00=" as
707
708        "="0="
709
710       With "escape_null" set, this will result in
711
712        "=\x00="
713
714       The default when using the "csv" function is "false".
715
716       For backward compatibility reasons,  the deprecated old name
717       "quote_null" is still recognized.
718
719       keep_meta_info
720
721        my $csv = Text::CSV_XS->new ({ keep_meta_info => 1 });
722                $csv->keep_meta_info (0);
723        my $f = $csv->keep_meta_info;
724
725       By default, the parsing of input records is as simple and fast as
726       possible.  However,  some parsing information - like quotation of the
727       original field - is lost in that process.  Setting this flag to true
728       enables retrieving that information after parsing with  the methods
729       "meta_info",  "is_quoted", and "is_binary" described below.  Default is
730       false for performance.
731
732       If you set this attribute to a value greater than 9,   then you can
733       control output quotation style like it was used in the input of the
734       last parsed record (unless quotation was added because of other
735       reasons).
736
737        my $csv = Text::CSV_XS->new ({
738           binary         => 1,
739           keep_meta_info => 1,
740           quote_space    => 0,
741           });
742
743        my $row = $csv->parse (q{1,,"", ," ",f,"g","h""h",help,"help"});
744
745        $csv->print (*STDOUT, \@row);
746        # 1,,, , ,f,g,"h""h",help,help
747        $csv->keep_meta_info (11);
748        $csv->print (*STDOUT, \@row);
749        # 1,,"", ," ",f,"g","h""h",help,"help"
750
751       undef_str
752
753        my $csv = Text::CSV_XS->new ({ undef_str => "\\N" });
754                $csv->undef_str (undef);
755        my $s = $csv->undef_str;
756
757       This attribute optionally defines the output of undefined fields. The
758       value passed is not changed at all, so if it needs quotation, the
759       quotation needs to be included in the value of the attribute.  Use with
760       caution, as passing a value like  ",",,,,"""  will for sure mess up
761       your output. The default for this attribute is "undef", meaning no
762       special treatment.
763
764       This attribute is useful when exporting  CSV data  to be imported in
765       custom loaders, like for MySQL, that recognize special sequences for
766       "NULL" data.
767
768       This attribute has no meaning when parsing CSV data.
769
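       A minimal sketch writing MySQL-style "NULL" markers  ($fh is assumed
       to be an open output handle):

        my $csv = Text::CSV_XS->new ({ undef_str => "\\N", eol => "\n" });
        $csv->print ($fh, [ 1, undef, "foo" ]);
        # 1,\N,foo
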
770       comment_str
771
772        my $csv = Text::CSV_XS->new ({ comment_str => "#" });
773                $csv->comment_str (undef);
774        my $s = $csv->comment_str;
775
776       This attribute optionally defines a string to be recognized as comment.
777       If this attribute is defined,   all lines starting with this sequence
778       will not be parsed as CSV but skipped as comment.
779
780       This attribute has no meaning when generating CSV.
781
782       Comment strings that start with any of the special characters/sequences
783       are not supported (so it cannot start with any of "sep_char",
784       "quote_char", "escape_char", "sep", "quote", or "eol").
785
786       For convenience, "comment" is an alias for "comment_str".
787
788       verbatim
789
790        my $csv = Text::CSV_XS->new ({ verbatim => 1 });
791                $csv->verbatim (0);
792        my $f = $csv->verbatim;
793
794       This is quite a controversial attribute to set,  but it makes some hard
795       things possible.
796
797       The rationale behind this attribute is to tell the parser that the
798       normally special characters newline ("NL") and Carriage Return ("CR")
799       will not be special when this flag is set,  and will be dealt with  as
800       ordinary binary characters. This will ease working with data with
801       embedded newlines.
802
803       When  "verbatim"  is used with  "getline",  "getline"  auto-"chomp"'s
804       every line.
805
806       Imagine a file format like
807
808        M^^Hans^Janssen^Klas 2\n2A^Ja^11-06-2007#\r\n
809
810       where the line ending is a very specific "#\r\n", and the sep_char is
811       a "^" (caret).   None of the fields is quoted,   but embedded binary
812       data is likely to be present. With the specific line ending, this
813       should not be too hard to detect.
814
815       By default,  Text::CSV_XS'  parse function only knows about "\n" and
816       "\r"  as legal line endings,  and so has to deal with
817       the embedded newline as a real "end-of-line",  so it can scan the next
818       line if binary is true, and the newline is inside a quoted field. With
819       this option, we tell "parse" to parse the line as if "\n" is just
820       nothing more than a binary character.
821
822       For "parse" this means that the parser has no more idea about line
823       ending and "getline" "chomp"s line endings on reading.
824
825       types
826
827       A set of column types; the attribute is immediately passed to the
828       "types" method.
829
830       callbacks
831
832       See the "Callbacks" section below.
833
834       accessors
835
836       To sum it up,
837
838        $csv = Text::CSV_XS->new ();
839
840       is equivalent to
841
842        $csv = Text::CSV_XS->new ({
843            eol                   => undef, # \r, \n, or \r\n
844            sep_char              => ',',
845            sep                   => undef,
846            quote_char            => '"',
847            quote                 => undef,
848            escape_char           => '"',
849            binary                => 0,
850            decode_utf8           => 1,
851            auto_diag             => 0,
852            diag_verbose          => 0,
853            blank_is_undef        => 0,
854            empty_is_undef        => 0,
855            allow_whitespace      => 0,
856            allow_loose_quotes    => 0,
857            allow_loose_escapes   => 0,
858            allow_unquoted_escape => 0,
859            always_quote          => 0,
860            quote_empty           => 0,
861            quote_space           => 1,
862            escape_null           => 1,
863            quote_binary          => 1,
864            keep_meta_info        => 0,
865            strict                => 0,
866            skip_empty_rows       => 0,
867            formula               => 0,
868            verbatim              => 0,
869            undef_str             => undef,
870            comment_str           => undef,
871            types                 => undef,
872            callbacks             => undef,
873            });
874
875       For all of the above mentioned flags, an accessor method is available
876       where you can inquire the current value, or change the value
877
878        my $quote = $csv->quote_char;
879        $csv->binary (1);
880
881       It is not wise to change these settings halfway through writing "CSV"
882       data to a stream. If however you want to create a new stream using the
883       available "CSV" object, there is no harm in changing them.
884
885       If the "new" constructor call fails,  it returns "undef",  and makes
886       the fail reason available through the "error_diag" method.
887
888        $csv = Text::CSV_XS->new ({ ecs_char => 1 }) or
889            die "".Text::CSV_XS->error_diag ();
890
891       "error_diag" will return a string like
892
893        "INI - Unknown attribute 'ecs_char'"
894
895   known_attributes
896        @attr = Text::CSV_XS->known_attributes;
897        @attr = Text::CSV_XS::known_attributes;
898        @attr = $csv->known_attributes;
899
900       This method will return an ordered list of all the supported
901       attributes as described above.   This can be useful for knowing what
902       attributes are valid in classes that use or extend Text::CSV_XS.
903
904   print
905        $status = $csv->print ($fh, $colref);
906
907       Similar to  "combine" + "string" + "print",  but much more efficient.
908       It expects an array ref as input  (not an array!)  and the resulting
909       string is not really  created,  but  immediately  written  to the  $fh
910       object, typically an IO handle or any other object that offers a
911       "print" method.
912
913       For performance reasons  "print"  does not create a result string,  so
914       all "string", "status", "fields", and "error_input" methods will return
915       undefined information after executing this method.
916
917       If $colref is "undef"  (explicit,  not through a variable argument) and
918       "bind_columns"  was used to specify fields to be printed,  it is
919       possible to make performance improvements, as otherwise data would have
920       to be copied as arguments to the method call:
921
922        $csv->bind_columns (\($foo, $bar));
923        $status = $csv->print ($fh, undef);
924
925       A short benchmark
926
927        my @data = ("aa" .. "zz");
928        $csv->bind_columns (\(@data));
929
930        $csv->print ($fh, [ @data ]);   # 11800 recs/sec
931        $csv->print ($fh,  \@data  );   # 57600 recs/sec
932        $csv->print ($fh,   undef  );   # 48500 recs/sec
933
934   say
935        $status = $csv->say ($fh, $colref);
936
937       Like "print", but "eol" defaults to "$\".
938
939   print_hr
940        $csv->print_hr ($fh, $ref);
941
942       Provides an easy way  to print a  $ref  (as fetched with "getline_hr")
943       provided the column names are set with "column_names".
944
945       It is just a wrapper method with basic parameter checks over
946
947        $csv->print ($fh, [ map { $ref->{$_} } $csv->column_names ]);
948
949   combine
950        $status = $csv->combine (@fields);
951
952       This method constructs a "CSV" record from  @fields,  returning success
953       or failure.   Failure can result from lack of arguments or an argument
954       that contains an invalid character.   Upon success,  "string" can be
955       called to retrieve the resultant "CSV" string.  Upon failure,  the
956       value returned by "string" is undefined and "error_input" could be
957       called to retrieve the invalid argument.
958
959   string
960        $line = $csv->string ();
961
962       This method returns the input to  "parse"  or the resultant "CSV"
963       string of "combine", whichever was called more recently.
964
965   getline
966        $colref = $csv->getline ($fh);
967
968       This is the counterpart to  "print",  as "parse"  is the counterpart to
969       "combine":  it parses a row from the $fh  handle using the "getline"
970       method associated with $fh  and parses this row into an array ref.
971       This array ref is returned by the function or "undef" for failure.
972       When $fh does not support "getline", you are likely to hit errors.
973
974       When fields are bound with "bind_columns" the return value is a
975       reference to an empty list.
976
977       The "string", "fields", and "status" methods are meaningless again.
978
979   getline_all
980        $arrayref = $csv->getline_all ($fh);
981        $arrayref = $csv->getline_all ($fh, $offset);
982        $arrayref = $csv->getline_all ($fh, $offset, $length);
983
984       This will return a reference to a list of getline ($fh) results.  In
985       this call, "keep_meta_info" is disabled.  If $offset is negative, as
986       with "splice", only the last  "abs ($offset)" records of $fh are taken
987       into consideration.
988
989       Given a CSV file with 10 lines:
990
991        lines call
992        ----- ---------------------------------------------------------
993        0..9  $csv->getline_all ($fh)         # all
994        0..9  $csv->getline_all ($fh,  0)     # all
995        8..9  $csv->getline_all ($fh,  8)     # start at 8
996        -     $csv->getline_all ($fh,  0,  0) # start at 0 first 0 rows
997        0..4  $csv->getline_all ($fh,  0,  5) # start at 0 first 5 rows
998        4..5  $csv->getline_all ($fh,  4,  2) # start at 4 first 2 rows
999        8..9  $csv->getline_all ($fh, -2)     # last 2 rows
1000        6..7  $csv->getline_all ($fh, -4,  2) # first 2 of last  4 rows
1001
1002   getline_hr
1003       The "getline_hr" and "column_names" methods work together  to allow you
1004       to have rows returned as hashrefs.  You must call "column_names" first
1005       to declare your column names.
1006
1007        $csv->column_names (qw( code name price description ));
1008        $hr = $csv->getline_hr ($fh);
1009        print "Price for $hr->{name} is $hr->{price} EUR\n";
1010
1011       "getline_hr" will croak if called before "column_names".
1012
1013       Note that  "getline_hr"  creates a hashref for every row and will be
1014       much slower than the combined use of "bind_columns"  and "getline" but
1015       still offers the same easy-to-use hashref inside the loop:
1016
1017        my @cols = @{$csv->getline ($fh)};
1018        $csv->column_names (@cols);
1019        while (my $row = $csv->getline_hr ($fh)) {
1020            print $row->{price};
1021            }
1022
1023       This could easily be rewritten to the much faster:
1024
1025        my @cols = @{$csv->getline ($fh)};
1026        my $row = {};
1027        $csv->bind_columns (\@{$row}{@cols});
1028        while ($csv->getline ($fh)) {
1029            print $row->{price};
1030            }
1031
1032       Your mileage may vary for the size of the data and the number of rows.
1033       With perl-5.14.2 the comparison for a 100_000 line file with 14
1034       columns:
1035
1036                   Rate hashrefs getlines
1037        hashrefs 1.00/s       --     -76%
1038        getlines 4.15/s     313%       --
1039
1040   getline_hr_all
1041        $arrayref = $csv->getline_hr_all ($fh);
1042        $arrayref = $csv->getline_hr_all ($fh, $offset);
1043        $arrayref = $csv->getline_hr_all ($fh, $offset, $length);
1044
1045       This will return a reference to a list of   getline_hr ($fh) results.
1046       In this call, "keep_meta_info" is disabled.
1047
1048   parse
1049        $status = $csv->parse ($line);
1050
1051       This method decomposes a  "CSV"  string into fields,  returning success
1052       or failure.   Failure can result from a lack of argument  or from an
1053       improperly formatted "CSV" string.   Upon success, "fields" can be
1054       called to retrieve the decomposed fields. Upon failure calling "fields"
1055       will return undefined data and  "error_input"  can be called to
1056       retrieve  the invalid argument.
1057
1058       You may use the "types"  method for setting column types.  See "types"'
1059       description below.
1060
1061       The $line argument is supposed to be a simple scalar. Everything else
1062       is supposed to croak and set error 1500.
1063
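       A minimal sketch of the parse/fields round trip ($line is assumed to
       hold one "CSV" record):

        if ($csv->parse ($line)) {
            my @fields = $csv->fields ();
            }
        else {
            warn "parse () failed: " . $csv->error_diag . "\n";
            }
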
1064   fragment
1065       This function tries to implement RFC7111  (URI Fragment Identifiers for
1066       the text/csv Media Type) -
1067       https://datatracker.ietf.org/doc/html/rfc7111
1068
1069        my $AoA = $csv->fragment ($fh, $spec);
1070
1071       In specifications,  "*" is used to specify the last item, a dash ("-")
1072       to indicate a range.   All indices are 1-based:  the first row or
1073       column has index 1. Selections can be combined with the semi-colon
1074       (";").
1075
1076       When using this method in combination with  "column_names",  the
1077       returned reference  will point to a  list of hashes  instead of a  list
1078       of lists.  A disjointed  cell-based combined selection  might return
1079       rows with different numbers of columns, making the use of hashes
1080       unpredictable.
1081
1082        $csv->column_names ("Name", "Age");
1083        my $AoH = $csv->fragment ($fh, "col=3;8");
1084
1085       If the "after_parse" callback is active,  it is also called on every
1086       line parsed and skipped before the fragment.
1087
1088       row
1089          row=4
1090          row=5-7
1091          row=6-*
1092          row=1-2;4;6-*
1093
1094       col
1095          col=2
1096          col=1-3
1097          col=4-*
1098          col=1-2;4;7-*
1099
1100       cell
1101         In cell-based selection, the comma (",") is used to pair row and
1102         column
1103
1104          cell=4,1
1105
1106         The range operator ("-") using "cell"s can be used to define top-left
1107         and bottom-right "cell" location
1108
1109          cell=3,1-4,6
1110
1111         The "*" is only allowed in the second part of a pair
1112
1113          cell=3,2-*,2    # row 3 till end, only column 2
1114          cell=3,2-3,*    # column 2 till end, only row 3
1115          cell=3,2-*,*    # strip row 1 and 2, and column 1
1116
1117         Cells and cell ranges may be combined with ";", possibly resulting in
1118         rows with different numbers of columns
1119
1120          cell=1,1-2,2;3,3-4,4;1,4;4,1
1121
1122         Disjointed selections will only return selected cells.   The cells
1123         that are not  specified  will  not  be  included  in the  returned
1124         set,  not even as "undef".  As an example given a "CSV" like
1125
1126          11,12,13,...19
1127          21,22,...28,29
1128          :            :
1129          91,...97,98,99
1130
1131         with "cell=1,1-2,2;3,3-4,4;1,4;4,1" will return:
1132
1133          11,12,14
1134          21,22
1135          33,34
1136          41,43,44
1137
1138         Overlapping cell-specs will return those cells only once.  So
1139         "cell=1,1-3,3;2,2-4,4;2,3;4,2" will return:
1140
1141          11,12,13
1142          21,22,23,24
1143          31,32,33,34
1144          42,43,44
1145
1146       RFC7111 <https://datatracker.ietf.org/doc/html/rfc7111> does  not
1147       allow different types of specs to be combined   (either "row" or "col"
1148       or "cell").  Passing an invalid fragment specification will croak and
1149       set error 2013.
1150
1151   column_names
1152       Set the "keys" that will be used in the  "getline_hr"  calls.  If no
1153       keys (column names) are passed, it will return the current setting as a
1154       list.
1155
1156       "column_names" accepts a list of scalars  (the column names)  or a
1157       single array_ref, so you can pass the return value from "getline" too:
1158
1159        $csv->column_names ($csv->getline ($fh));
1160
1161       "column_names" does no checking on duplicates at all, which might lead
1162       to unexpected results.   Undefined entries will be replaced with the
1163       string "\cAUNDEF\cA", so
1164
1165        $csv->column_names (undef, "", "name", "name");
1166        $hr = $csv->getline_hr ($fh);
1167
1168       will set "$hr->{"\cAUNDEF\cA"}" to the 1st field,  "$hr->{""}" to the
1169       2nd field, and "$hr->{name}" to the 4th field,  discarding the 3rd
1170       field.
1171
1172       "column_names" croaks on invalid arguments.
1173
1174   header
1175       This method does NOT work in perl-5.6.x
1176
1177       Parse the CSV header and set "sep", column_names and encoding.
1178
1179        my @hdr = $csv->header ($fh);
1180        $csv->header ($fh, { sep_set => [ ";", ",", "|", "\t" ] });
1181        $csv->header ($fh, { detect_bom => 1, munge_column_names => "lc" });
1182
1183       The first argument should be a file handle.
1184
1185       This method resets some object properties,  as it is supposed to be
1186       invoked only once per file or stream.  It will leave attributes
1187       "column_names" and "bound_columns" alone if setting column names is
1188       disabled. Reading headers on previously processed objects might fail on
1189       perl-5.8.0 and older.
1190
1191       Assuming that the file opened for parsing has a header, and the header
1192       does not contain problematic characters like embedded newlines,   read
1193       the first line from the open handle then auto-detect whether the header
1194       separates the column names with a character from the allowed separator
1195       list.
1196
1197       If any of the allowed separators matches,  and none of the other
1198       allowed separators match,  set  "sep"  to that  separator  for the
1199       current CSV_XS instance and use it to parse the first line, map those
1200       to lowercase, and use that to set the instance "column_names":
1201
1202        my $csv = Text::CSV_XS->new ({ binary => 1, auto_diag => 1 });
1203        open my $fh, "<", "file.csv";
1204        binmode $fh; # for Windows
1205        $csv->header ($fh);
1206        while (my $row = $csv->getline_hr ($fh)) {
1207            ...
1208            }
1209
1210       If the header is empty,  contains more than one unique separator out of
1211       the allowed set,  contains empty fields,   or contains identical fields
1212       (after folding), it will croak with error 1010, 1011, 1012, or 1013
1213       respectively.
1214
1215       If the header contains embedded newlines or is not valid  CSV  in any
1216       other way, this method will croak and leave the parse error untouched.
1217
1218       A successful call to "header"  will always set the  "sep"  of the $csv
1219       object. This behavior can not be disabled.
1220
1221       return value
1222
1223       On error this method will croak.
1224
1225       In list context,  the headers will be returned whether they are used to
1226       set "column_names" or not.
1227
1228       In scalar context, the instance itself is returned.  Note: the values
1229       as found in the header will effectively be  lost if  "set_column_names"
1230       is false.
1231
1232       Options
1233
1234       sep_set
1235          $csv->header ($fh, { sep_set => [ ";", ",", "|", "\t" ] });
1236
1237         The list of legal separators defaults to "[ ";", "," ]" and can be
1238         changed by this option.  As this is probably the most often used
1239         option,  it can be passed on its own as an unnamed argument:
1240
1241          $csv->header ($fh, [ ";", ",", "|", "\t", "::", "\x{2063}" ]);
1242
1243         Multi-byte  sequences are allowed,  both multi-character and
1244         Unicode.  See "sep".
1245
1246       detect_bom
1247          $csv->header ($fh, { detect_bom => 1 });
1248
1249         The default behavior is to detect if the header line starts with a
1250         BOM.  If the header has a BOM, use that to set the encoding of $fh.
1251         This default behavior can be disabled by passing a false value to
1252         "detect_bom".
1253
1254         Supported encodings from BOM are: UTF-8, UTF-16BE, UTF-16LE,
1255         UTF-32BE,  and UTF-32LE. BOM also supports UTF-1, UTF-EBCDIC, SCSU,
1256         BOCU-1,  and GB-18030 but Encode does not (yet). UTF-7 is not
1257         supported.
1258
1259         If a supported BOM was detected as start of the stream, it is stored
1260         in the object attribute "ENCODING".
1261
1262          my $enc = $csv->{ENCODING};
1263
1264         The encoding is used with "binmode" on $fh.
1265
1266         If the handle was opened in a (correct) encoding,  this method will
1267         not alter the encoding, as it checks the leading bytes of the first
1268         line. In case the stream starts with a decoded BOM ("U+FEFF"),
1269         "{ENCODING}" will be "" (empty) instead of the default "undef".
1270
1271       munge_column_names
1272         This option offers the means to modify the column names into
1273         something that is most useful to the application.   The default is to
1274         map all column names to lower case.
1275
1276          $csv->header ($fh, { munge_column_names => "lc" });
1277
1278         The following values are available:
1279
1280           lc     - lower case
1281           uc     - upper case
1282           db     - valid DB field names
1283           none   - do not change
1284           \%hash - supply a mapping
1285           \&cb   - supply a callback
1286
1287         Lower case
1288            $csv->header ($fh, { munge_column_names => "lc" });
1289
1290           The header is changed to all lower-case
1291
1292            $_ = lc;
1293
1294         Upper case
1295            $csv->header ($fh, { munge_column_names => "uc" });
1296
1297           The header is changed to all upper-case
1298
1299            $_ = uc;
1300
1301         Literal
1302            $csv->header ($fh, { munge_column_names => "none" });
1303
1304         Hash
1305            $csv->header ($fh, { munge_column_names => { foo => "sombrero" });
1306
1307           If a value does not exist, the original value is used unchanged
1308
1309         Database
1310            $csv->header ($fh, { munge_column_names => "db" });
1311
1312           - lower-case
1313
1314           - all sequences of non-word characters are replaced with an
1315             underscore
1316
1317           - all leading underscores are removed
1318
1319            $_ = lc (s/\W+/_/gr =~ s/^_+//r);
1320
1321         Callback
1322            $csv->header ($fh, { munge_column_names => sub { fc } });
1323            $csv->header ($fh, { munge_column_names => sub { "column_".$col++ } });
1324            $csv->header ($fh, { munge_column_names => sub { lc (s/\W+/_/gr) } });
1325
1326           As this callback is called in a "map", you can use $_ directly.
1327
1328       set_column_names
1329          $csv->header ($fh, { set_column_names => 1 });
1330
1331         The default is to set the instance's column names using
1332         "column_names" if the method is successful,  so subsequent calls to
1333         "getline_hr" can return a hash.  Setting the column names can be
1334         disabled by passing a false value for this option.
1335
1336         As described in "return value" above, content is lost in scalar
1337         context.
1338
1339       Validation
1340
1341       When receiving CSV files from external sources,  this method can be
1342       used to protect against changes in the layout by restricting to known
1343       headers  (and typos in the header fields).
1344
1345        my %known = (
1346            "record key" => "c_rec",
1347            "rec id"     => "c_rec",
1348            "id_rec"     => "c_rec",
1349            "kode"       => "code",
1350            "code"       => "code",
1351            "vaule"      => "value",
1352            "value"      => "value",
1353            );
1354        my $csv = Text::CSV_XS->new ({ binary => 1, auto_diag => 1 });
1355        open my $fh, "<", $source or die "$source: $!";
1356        $csv->header ($fh, { munge_column_names => sub {
1357            s/\s+$//;
1358            s/^\s+//;
1359            $known{lc $_} or die "Unknown column '$_' in $source";
1360            }});
1361        while (my $row = $csv->getline_hr ($fh)) {
1362            say join "\t", $row->{c_rec}, $row->{code}, $row->{value};
1363            }
1364
1365   bind_columns
1366       Takes a list of scalar references to be used for output with  "print"
1367       or to store in the fields fetched by "getline".  When you do not pass
1368       enough references to store the fetched fields in, "getline" will fail
1369       with error 3006.  If you pass more than there are fields to return,
1370       the content of the remaining references is left untouched.
1371
1372        $csv->bind_columns (\$code, \$name, \$price, \$description);
1373        while ($csv->getline ($fh)) {
1374            print "The price of a $name is \x{20ac} $price\n";
1375            }
1376
1377       To reset or clear all column binding, call "bind_columns" with the
1378       single argument "undef". This will also clear column names.
1379
1380        $csv->bind_columns (undef);
1381
1382       If no arguments are passed at all, "bind_columns" will return the list
1383       of current bindings or "undef" if no binds are active.
1384
1385       Note that in parsing with  "bind_columns",  the fields are set on the
1386       fly.  That implies that if the third field of a row causes an error
1387       (or this row has just two fields where the previous row had more),  the
1388       first two fields already have been assigned the values of the current
1389       row, while the rest of the fields will still hold the values of the
1390       previous row.  If you want the parser to fail in these cases, use the
1391       "strict" attribute.
1392
1393   eof
1394        $eof = $csv->eof ();
1395
1396       If "parse" or  "getline"  was used with an IO stream,  this method will
1397       return true (1) if the last call hit end of file,  otherwise it will
1398       return false ('').  This is useful to see the difference between a
1399       failure and end of file.
1400
1401       Note that if the parsing of the last line caused an error,  "eof" is
1402       still true.  That means that if you are not using "auto_diag", an idiom
1403       like
1404
1405        while (my $row = $csv->getline ($fh)) {
1406            # ...
1407            }
1408        $csv->eof or $csv->error_diag;
1409
1410       will not report the error. You would have to change that to
1411
1412        while (my $row = $csv->getline ($fh)) {
1413            # ...
1414            }
1415        +$csv->error_diag and $csv->error_diag;
1416
1417   types
1418        $csv->types (\@tref);
1419
1420       This method is used to force (all) columns to be of a given type.
1421       For example, if you have an integer column,  two  columns  with
1422       doubles  and a string column, then you might do a
1423
1424        $csv->types ([Text::CSV_XS::IV (),
1425                      Text::CSV_XS::NV (),
1426                      Text::CSV_XS::NV (),
1427                      Text::CSV_XS::PV ()]);
1428
1429       Column types are used only for decoding columns while parsing,  in
1430       other words by the "parse" and "getline" methods.
1431
1432       You can unset column types by doing a
1433
1434        $csv->types (undef);
1435
1436       or fetch the current type settings with
1437
1438        $types = $csv->types ();
1439
1440       IV  Set field type to integer.
1441
1442       NV  Set field type to numeric/float.
1443
1444       PV  Set field type to string.
1445
1446   fields
1447        @columns = $csv->fields ();
1448
1449       This method returns the input to   "combine"  or the resultant
1450       decomposed fields of a successful "parse", whichever was called more
1451       recently.
1452
1453       Note that the return value is undefined after using "getline", which
1454       does not fill the data structures returned by "parse".
1455
1456   meta_info
1457        @flags = $csv->meta_info ();
1458
1459       This method returns the "flags" of the input to "combine" or the flags
1460       of the resultant  decomposed fields of  "parse",   whichever was called
1461       more recently.
1462
1463       For each field,  a meta_info field will hold  flags that  inform
1464       something about  the  field  returned  by  the  "fields"  method or
1465       passed to  the "combine" method. The flags are bit-wise-"or"'d like:
1466
1467       " "0x0001
1468         The field was quoted.
1469
1470       " "0x0002
1471         The field was binary.
1472
1473       See the "is_***" methods below.
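
       As an illustration (a minimal sketch, not taken from the
       distribution), the flags can be inspected directly after a "parse"
       when "keep_meta_info" is enabled:

        my $csv = Text::CSV_XS->new ({ keep_meta_info => 1 });
        $csv->parse (q{1,"foo",bar}) or die "parse failed";
        my @flags = $csv->meta_info;
        # $flags[0] == 0x0000 : first field was not quoted
        # $flags[1] &  0x0001 : second field was quoted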
1474
1475   is_quoted
1476        my $quoted = $csv->is_quoted ($column_idx);
1477
1478       where  $column_idx is the  (zero-based)  index of the column in the
1479       last result of "parse".
1480
1481       This returns a true value  if the data in the indicated column was
1482       enclosed in "quote_char" quotes.  This might be important for fields
1483       where content ",20070108," is to be treated as a numeric value,  and
1484       where ","20070108"," is explicitly marked as character string data.
1485
1486       This method is only valid when "keep_meta_info" is set to a true value.
1487
1488   is_binary
1489        my $binary = $csv->is_binary ($column_idx);
1490
1491       where  $column_idx is the  (zero-based)  index of the column in the
1492       last result of "parse".
1493
1494       This returns a true value if the data in the indicated column contained
1495       any byte in the range "[\x00-\x08,\x10-\x1F,\x7F-\xFF]".
1496
1497       This method is only valid when "keep_meta_info" is set to a true value.
1498
1499   is_missing
1500        my $missing = $csv->is_missing ($column_idx);
1501
1502       where  $column_idx is the  (zero-based)  index of the column in the
1503       last result of "getline_hr".
1504
1505        $csv->keep_meta_info (1);
1506        while (my $hr = $csv->getline_hr ($fh)) {
1507            $csv->is_missing (0) and next; # This was an empty line
1508            }
1509
1510       When using  "getline_hr",  it is impossible to tell if the  parsed
1511       fields are "undef" because they were not filled in the "CSV" stream
1512       or because they were not read at all, as all the fields defined by
1513       "column_names" are set in the hash-ref.    If you still need to know if
1514       all fields in each row are provided, you should enable "keep_meta_info"
1515       so you can check the flags.
1516
1517       If  "keep_meta_info"  is "false",  "is_missing"  will always return
1518       "undef", regardless of $column_idx being valid or not. If this
1519       attribute is "true" it will return either 0 (the field is present) or 1
1520       (the field is missing).
1521
1522       A special case is the empty line.  If the line is completely empty -
1523       after dealing with the flags - this is still a valid CSV line:  it is a
1524       record of just one single empty field. However, if "keep_meta_info" is
1525       set, invoking "is_missing" with index 0 will now return true.
1526
1527   status
1528        $status = $csv->status ();
1529
1530       This method returns the status of the last invoked "combine" or "parse"
1531       call. Status is success (true: 1) or failure (false: "undef" or 0).
1532
1533       Note that as this only keeps track of the status of above mentioned
1534       methods, you are probably looking for "error_diag" instead.
1535
1536   error_input
1537        $bad_argument = $csv->error_input ();
1538
1539       This method returns the erroneous argument (if it exists) of "combine"
1540       or "parse",  whichever was called more recently.  If the last
1541       invocation was successful, "error_input" will return "undef".
1542
1543       Depending on the type of error, it might also hold the data for the
1544       last error-input of "getline".
1545
1546   error_diag
1547        Text::CSV_XS->error_diag ();
1548        $csv->error_diag ();
1549        $error_code               = 0  + $csv->error_diag ();
1550        $error_str                = "" . $csv->error_diag ();
1551        ($cde, $str, $pos, $rec, $fld) = $csv->error_diag ();
1552
1553       If (and only if) an error occurred,  this function returns  the
1554       diagnostics of that error.
1555
1556       If called in void context,  this will print the internal error code and
1557       the associated error message to STDERR.
1558
1559       If called in list context,  this will return  the error code  and the
1560       error message in that order.  If the last error was from parsing, the
1561       rest of the values returned are a best guess at the location  within
1562       the line  that was being parsed. Their values are 1-based.  The
1563       position currently is the index of the byte at which the parsing failed in
1564       the current record. It might change to be the index of the current
1565       character in a later release. The record is the index of the record
1566       parsed by the csv instance. The field number is the index of the field
1567       the parser thinks it is currently  trying to  parse. See
1568       examples/csv-check for how this can be used.
1569
1570       If called in  scalar context,  it will return  the diagnostics  in a
1571       single scalar, a-la $!.  It will contain the error code in numeric
1572       context, and the diagnostics message in string context.
1573
1574       When called as a class method or a  direct function call,  the
1575       diagnostics are that of the last "new" call.
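
       A small sketch ($line being hypothetical input) showing the dual
       nature of the scalar-context return value:

        unless ($csv->parse ($line)) {
            my $diag = $csv->error_diag;  # code in numeric, message in string context
            printf "error %d: %s\n", $diag, $diag;
            }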
1576
1577   record_number
1578        $recno = $csv->record_number ();
1579
1580       Returns the number of records parsed by this csv instance.  This
1581       value should be more accurate than $. when embedded newlines come
1582       into play. Records written by this instance are not counted.
1583
1584   SetDiag
1585        $csv->SetDiag (0);
1586
1587       Use this method to reset the diagnostics if you are dealing with errors.
1588

FUNCTIONS

1590   csv
1591       This function is not exported by default and should be explicitly
1592       requested:
1593
1594        use Text::CSV_XS qw( csv );
1595
1596       This is a high-level function that aims at simple (user) interfaces.
1597       This can be used to read/parse a "CSV" file or stream (the default
1598       behavior) or to produce a file or write to a stream (define the  "out"
1599       attribute).  It returns an array- or hash-reference on parsing (or
1600       "undef" on fail) or the numeric value of  "error_diag"  on writing.
1601       When this function fails you can get to the error using the class call
1602       to "error_diag"
1603
1604        my $aoa = csv (in => "test.csv") or
1605            die Text::CSV_XS->error_diag;
1606
1607       This function takes the arguments as key-value pairs. This can be
1608       passed as a list or as an anonymous hash:
1609
1610        my $aoa = csv (  in => "test.csv", sep_char => ";");
1611        my $aoh = csv ({ in => $fh, headers => "auto" });
1612
1613       The arguments passed consist of two parts:  the arguments to "csv"
1614       itself and the optional attributes to the  "CSV"  object used inside
1615       the function as enumerated and explained in "new".
1616
1617       If not overridden, the default options used for CSV are
1618
1619        auto_diag   => 1
1620        escape_null => 0
1621
1622       The option that is always set and cannot be altered is
1623
1624        binary      => 1
1625
1626       As this function will likely be used in one-liners,  it allows  "quote"
1627       to be abbreviated as "quo",  and  "escape_char" to be abbreviated as
1628       "esc" or "escape".
1629
1630       Alternative invocations:
1631
1632        my $aoa = Text::CSV_XS::csv (in => "file.csv");
1633
1634        my $csv = Text::CSV_XS->new ();
1635        my $aoa = $csv->csv (in => "file.csv");
1636
1637       In the latter case, the object attributes are used from the existing
1638       object and the attribute arguments in the function call are ignored:
1639
1640        my $csv = Text::CSV_XS->new ({ sep_char => ";" });
1641        my $aoh = $csv->csv (in => "file.csv", bom => 1);
1642
1643       will parse using ";" as "sep_char", not ",".
1644
1645       in
1646
1647       Used to specify the source.  "in" can be a file name (e.g. "file.csv"),
1648       which will be  opened for reading  and closed when finished,  a file
1649       handle (e.g.  $fh or "FH"),  a reference to a glob (e.g. "\*ARGV"),
1650       the glob itself (e.g. *STDIN), or a reference to a scalar (e.g.
1651       "\q{1,2,"csv"}").
1652
1653       When used with "out", "in" should be a reference to a CSV structure
1654       (AoA or AoH)  or a CODE-ref that returns an array-reference or a hash-
1655       reference.  The code-ref will be invoked with no arguments.
1656
1657        my $aoa = csv (in => "file.csv");
1658
1659        open my $fh, "<", "file.csv";
1660        my $aoa = csv (in => $fh);
1661
1662        my $csv = [ [qw( Foo Bar )], [ 1, 2 ], [ 2, 3 ]];
1663        my $err = csv (in => $csv, out => "file.csv");
1664
1665       If called in void context without the "out" attribute, the resulting
1666       ref will be used as input to a subsequent call to csv:
1667
1668        csv (in => "file.csv", filter => { 2 => sub { length > 2 }})
1669
1670       will be a shortcut to
1671
1672        csv (in => csv (in => "file.csv", filter => { 2 => sub { length > 2 }}))
1673
1674       where, in the absence of the "out" attribute, this is a shortcut to
1675
1676        csv (in  => csv (in => "file.csv", filter => { 2 => sub { length > 2 }}),
1677             out => *STDOUT)
1678
1679       out
1680
1681        csv (in => $aoa, out => "file.csv");
1682        csv (in => $aoa, out => $fh);
1683        csv (in => $aoa, out =>   STDOUT);
1684        csv (in => $aoa, out =>  *STDOUT);
1685        csv (in => $aoa, out => \*STDOUT);
1686        csv (in => $aoa, out => \my $data);
1687        csv (in => $aoa, out =>  undef);
1688        csv (in => $aoa, out => \"skip");
1689
1690        csv (in => $fh,  out => \@aoa);
1691        csv (in => $fh,  out => \@aoh, bom => 1);
1692        csv (in => $fh,  out => \%hsh, key => "key");
1693
1694       In output mode, the default CSV options when producing CSV are
1695
1696        eol       => "\r\n"
1697
1698       The "fragment" attribute is ignored in output mode.
1699
1700       "out" can be a file name  (e.g.  "file.csv"),  which will be opened for
1701       writing and closed when finished,  a file handle (e.g. $fh or "FH"),  a
1702       reference to a glob (e.g. "\*STDOUT"),  the glob itself (e.g. *STDOUT),
1703       or a reference to a scalar (e.g. "\my $data").
1704
1705        csv (in => sub { $sth->fetch },            out => "dump.csv");
1706        csv (in => sub { $sth->fetchrow_hashref }, out => "dump.csv",
1707             headers => $sth->{NAME_lc});
1708
1709       When a code-ref is used for "in", the output is generated  per
1710       invocation, so no buffering is involved. This implies that there is no
1711       size restriction on the number of records. The "csv" function ends when
1712       the coderef returns a false value.
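
       For example, a minimal sketch that streams generated rows until the
       code-ref returns a false value:

        my @queue = (1 .. 10);
        csv (in  => sub { @queue ? [ shift @queue ] : undef },
             out => "numbers.csv");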
1713
1714       If "out" is set to a reference of the literal string "skip", the output
1715       will be suppressed completely,  which might be useful in combination
1716       with a filter for side effects only.
1717
1718        my %cache;
1719        csv (in    => "dump.csv",
1720             out   => \"skip",
1721             on_in => sub { $cache{$_[1][1]}++ });
1722
1723       Currently,  setting "out" to any false value  ("undef", "", 0) will be
1724       equivalent to "\"skip"".
1725
1726       If the "in" argument points to something to parse, and the "out" is set
1727       to a reference to an "ARRAY" or a "HASH", the output is appended to the
1728       data in the existing reference. The result of the parse should match
1729       what exists in the reference passed. This might come in handy when you
1730       have to parse a set of files with similar content (like data stored per
1731       period) and you want to collect that into a single data structure:
1732
1733        my %hash;
1734        csv (in => $_, out => \%hash, key => "id") for sort glob "foo-[0-9]*.csv";
1735
1736        my @list; # List of arrays
1737        csv (in => $_, out => \@list)              for sort glob "foo-[0-9]*.csv";
1738
1739        my @list; # List of hashes
1740        csv (in => $_, out => \@list, bom => 1)    for sort glob "foo-[0-9]*.csv";
1741
1742       encoding
1743
1744       If passed,  it should be an encoding accepted by the  ":encoding()"
1745       option to "open". There is no default value. This attribute does not
1746       work in perl 5.6.x.  "encoding" can be abbreviated to "enc" for ease of
1747       use in command line invocations.
1748
1749       If "encoding" is set to the literal value "auto", the method "header"
1750       will be invoked on the opened stream to check if there is a BOM and set
1751       the encoding accordingly.   This is equal to passing a true value in
1752       the option "detect_bom".
1753
1754       Encodings can be stacked, as supported by "binmode":
1755
1756        # Using PerlIO::via::gzip
1757        csv (in       => \@csv,
1758             out      => "test.csv:via.gz",
1759             encoding => ":via(gzip):encoding(utf-8)",
1760             );
1761        $aoa = csv (in => "test.csv:via.gz",  encoding => ":via(gzip)");
1762
1763        # Using PerlIO::gzip
1764        csv (in       => \@csv,
1765             out      => "test.csv:gzip.gz",
1766             encoding => ":gzip:encoding(utf-8)",
1767             );
1768        $aoa = csv (in => "test.csv:gzip.gz", encoding => ":gzip");
1769
1770       detect_bom
1771
1772       If  "detect_bom"  is given, the method  "header"  will be invoked on
1773       the opened stream to check if there is a BOM and set the encoding
1774       accordingly.
1775
1776       "detect_bom" can be abbreviated to "bom".
1777
1778       This is the same as setting "encoding" to "auto".
1779
1780       Note that as the method  "header" is invoked,  its default is to also
1781       set the headers.
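
       A minimal sketch ("file.csv" being a hypothetical file that may start
       with a BOM):

        use Text::CSV_XS qw( csv );

        # the encoding is taken from the BOM (if any) and, as "header" sets
        # the column names by default, the result is an array of hashes
        my $aoh = csv (in => "file.csv", detect_bom => 1); # or: bom => 1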
1782
1783       headers
1784
1785       If this attribute is not given, the default behavior is to produce an
1786       array of arrays.
1787
1788       If "headers" is supplied,  it should be an anonymous list of column
1789       names, an anonymous hashref, a coderef, or a literal flag:  "auto",
1790       "lc", "uc", or "skip".
1791
1792       skip
1793         When "skip" is used, the header will not be included in the output.
1794
1795          my $aoa = csv (in => $fh, headers => "skip");
1796
1797       auto
1798         If "auto" is used, the first line of the "CSV" source will be read as
1799         the list of field headers and used to produce an array of hashes.
1800
1801          my $aoh = csv (in => $fh, headers => "auto");
1802
1803       lc
1804         If "lc" is used,  the first line of the  "CSV" source will be read as
1805         the list of field headers mapped to  lower case and used to produce
1806         an array of hashes. This is a variation of "auto".
1807
1808          my $aoh = csv (in => $fh, headers => "lc");
1809
1810       uc
1811         If "uc" is used,  the first line of the  "CSV" source will be read as
1812         the list of field headers mapped to  upper case and used to produce
1813         an array of hashes. This is a variation of "auto".
1814
1815          my $aoh = csv (in => $fh, headers => "uc");
1816
1817       CODE
1818         If a coderef is used,  the first line of the  "CSV" source will be
1819         read as the list of mangled field headers in which each field is
1820         passed as the only argument to the coderef. This list is used to
1821         produce an array of hashes.
1822
1823          my $aoh = csv (in      => $fh,
1824                         headers => sub { lc ($_[0]) =~ s/kode/code/gr });
1825
1826         this example is a variation of using "lc" where all occurrences of
1827         "kode" are replaced with "code".
1828
1829       ARRAY
1830         If  "headers"  is an anonymous list,  the entries in the list will be
1831         used as field names. The first line is considered data instead of
1832         headers.
1833
1834          my $aoh = csv (in => $fh, headers => [qw( Foo Bar )]);
1835          csv (in => $aoa, out => $fh, headers => [qw( code description price )]);
1836
1837       HASH
1838         If "headers" is a hash reference, this implies "auto", but header
1839         fields that exist as key in the hashref will be replaced by the value
1840         for that key. Given a CSV file like
1841
1842          post-kode,city,name,id number,fubble
1843          1234AA,Duckstad,Donald,13,"X313DF"
1844
1845         using
1846
1847          csv (headers => { "post-kode" => "pc", "id number" => "ID" }, ...
1848
1849         will return an entry like
1850
1851          { pc     => "1234AA",
1852            city   => "Duckstad",
1853            name   => "Donald",
1854            ID     => "13",
1855            fubble => "X313DF",
1856            }
1857
1858       See also "munge_column_names" and "set_column_names".
1859
1860       munge_column_names
1861
1862       If "munge_column_names" is set,  the method  "header"  is invoked on
1863       the opened stream with all matching arguments to detect and set the
1864       headers.
1865
1866       "munge_column_names" can be abbreviated to "munge".
1867
1868       key
1869
1870       If passed,  will default  "headers"  to "auto" and return a hashref
1871       instead of an array of hashes. Allowed values are simple scalars or
1872       array-references where the first element is the joiner and the rest are
1873       the fields to join to combine the key.
1874
1875        my $ref = csv (in => "test.csv", key => "code");
1876        my $ref = csv (in => "test.csv", key => [ ":" => "code", "color" ]);
1877
1878       with test.csv like
1879
1880        code,product,price,color
1881        1,pc,850,gray
1882        2,keyboard,12,white
1883        3,mouse,5,black
1884
1885       the first example will return
1886
1887         { 1   => {
1888               code    => 1,
1889               color   => 'gray',
1890               price   => 850,
1891               product => 'pc'
1892               },
1893           2   => {
1894               code    => 2,
1895               color   => 'white',
1896               price   => 12,
1897               product => 'keyboard'
1898               },
1899           3   => {
1900               code    => 3,
1901               color   => 'black',
1902               price   => 5,
1903               product => 'mouse'
1904               }
1905           }
1906
1907       the second example will return
1908
1909         { "1:gray"    => {
1910               code    => 1,
1911               color   => 'gray',
1912               price   => 850,
1913               product => 'pc'
1914               },
1915           "2:white"   => {
1916               code    => 2,
1917               color   => 'white',
1918               price   => 12,
1919               product => 'keyboard'
1920               },
1921           "3:black"   => {
1922               code    => 3,
1923               color   => 'black',
1924               price   => 5,
1925               product => 'mouse'
1926               }
1927           }
1928
1929       The "key" attribute can be combined with "headers" for "CSV" date that
1930       has no header line, like
1931
1932        my $ref = csv (
1933            in      => "foo.csv",
1934            headers => [qw( c_foo foo bar description stock )],
1935            key     =>     "c_foo",
1936            );
1937
1938       value
1939
1940       Used to create key-value hashes.
1941
1942       Only allowed when "key" is valid. A "value" can be either a single
1943       column label or an anonymous list of column labels.  In the first case,
1944       the value will be a simple scalar value, in the latter case, it will be
1945       a hashref.
1946
1947        my $ref = csv (in => "test.csv", key   => "code",
1948                                         value => "price");
1949        my $ref = csv (in => "test.csv", key   => "code",
1950                                         value => [ "product", "price" ]);
1951        my $ref = csv (in => "test.csv", key   => [ ":" => "code", "color" ],
1952                                         value => "price");
1953        my $ref = csv (in => "test.csv", key   => [ ":" => "code", "color" ],
1954                                         value => [ "product", "price" ]);
1955
1956       with test.csv like
1957
1958        code,product,price,color
1959        1,pc,850,gray
1960        2,keyboard,12,white
1961        3,mouse,5,black
1962
1963       the first example will return
1964
1965         { 1 => 850,
1966           2 =>  12,
1967           3 =>   5,
1968           }
1969
1970       the second example will return
1971
1972         { 1   => {
1973               price   => 850,
1974               product => 'pc'
1975               },
1976           2   => {
1977               price   => 12,
1978               product => 'keyboard'
1979               },
1980           3   => {
1981               price   => 5,
1982               product => 'mouse'
1983               }
1984           }
1985
1986       the third example will return
1987
1988         { "1:gray"    => 850,
1989           "2:white"   =>  12,
1990           "3:black"   =>   5,
1991           }
1992
1993       the fourth example will return
1994
1995         { "1:gray"    => {
1996               price   => 850,
1997               product => 'pc'
1998               },
1999           "2:white"   => {
2000               price   => 12,
2001               product => 'keyboard'
2002               },
2003           "3:black"   => {
2004               price   => 5,
2005               product => 'mouse'
2006               }
2007           }
2008
2009       keep_headers
2010
2011       When using hashes,  keep the column names in the arrayref passed,  so
2012       all headers are available after the call in the original order.
2013
2014        my $aoh = csv (in => "file.csv", keep_headers => \my @hdr);
2015
2016       This attribute can be abbreviated to "kh" or passed as
2017       "keep_column_names".
2018
2019       This attribute implies a default of "auto" for the "headers" attribute.
2020
2021       fragment
2022
2023       Only output the fragment as defined in the "fragment" method. This
2024       option is ignored when generating "CSV". See "out".
2025
2026       Combining all of them could give something like
2027
2028        use Text::CSV_XS qw( csv );
2029        my $aoh = csv (
2030            in       => "test.txt",
2031            encoding => "utf-8",
2032            headers  => "auto",
2033            sep_char => "|",
2034            fragment => "row=3;6-9;15-*",
2035            );
2036        say $aoh->[15]{Foo};
2037
2038       sep_set
2039
2040       If "sep_set" is set, the method "header" is invoked on the opened
2041       stream to detect and set "sep_char" with the given set.
2042
2043       "sep_set" can be abbreviated to "seps".
2044
2045       Note that as the  "header" method is invoked,  its default is to also
2046       set the headers.
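
       For example (a sketch), to accept ";", "," or TAB as separator in a
       file where the separator is not known beforehand:

        my $aoh = csv (in   => "file.csv",
                       seps => [ ";", ",", "\t" ],
                       bom  => 1);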
2047
2048       set_column_names
2049
2050       If  "set_column_names" is passed,  the method "header" is invoked on
2051       the opened stream with all arguments meant for "header".
2052
2053       If "set_column_names" is passed as a false value, the content of the
2054       first row is only preserved if the output is AoA:
2055
2056       With an input-file like
2057
2058        bAr,foo
2059        1,2
2060        3,4,5
2061
2062       This call
2063
2064        my $aoa = csv (in => $file, set_column_names => 0);
2065
2066       will result in
2067
2068        [[ "bar", "foo"     ],
2069         [ "1",   "2"       ],
2070         [ "3",   "4",  "5" ]]
2071
2072       and
2073
2074        my $aoa = csv (in => $file, set_column_names => 0, munge => "none");
2075
2076       will result in
2077
2078        [[ "bAr", "foo"     ],
2079         [ "1",   "2"       ],
2080         [ "3",   "4",  "5" ]]
2081
2082   Callbacks
2083       Callbacks enable actions triggered from the inside of Text::CSV_XS.
2084
2085       While most of what this enables  can easily be done in an  unrolled
2086       loop as described in the "SYNOPSIS",  callbacks can be used to meet
2087       special demands or enhance the "csv" function.
2088
2089       error
2090          $csv->callbacks (error => sub { $csv->SetDiag (0) });
2091
2092         the "error"  callback is invoked when an error occurs,  but  only
2093         when "auto_diag" is set to a true value. A callback is invoked with
2094         the values returned by "error_diag":
2095
2096          my ($c, $s);
2097
2098          sub ignore3006 {
2099              my ($err, $msg, $pos, $recno, $fldno) = @_;
2100              if ($err == 3006) {
2101                  # ignore this error
2102                  ($c, $s) = (undef, undef);
2103                  Text::CSV_XS->SetDiag (0);
2104                  }
2105              # Any other error
2106              return;
2107              } # ignore3006
2108
2109          $csv->callbacks (error => \&ignore3006);
2110          $csv->bind_columns (\$c, \$s);
2111          while ($csv->getline ($fh)) {
2112              # Error 3006 will not stop the loop
2113              }
2114
2115       after_parse
2116          $csv->callbacks (after_parse => sub { push @{$_[1]}, "NEW" });
2117          while (my $row = $csv->getline ($fh)) {
2118              $row->[-1] eq "NEW";
2119              }
2120
2121         This callback is invoked after parsing with  "getline"  only if no
2122         error occurred.  The callback is invoked with two arguments:   the
2123         current "CSV" parser object and an array reference to the fields
2124         parsed.
2125
2126         The return code of the callback is ignored  unless it is a reference
2127         to the string "skip", in which case the record will be skipped in
2128         "getline_all".
2129
2130          sub add_from_db {
2131              my ($csv, $row) = @_;
2132              $sth->execute ($row->[4]);
2133              push @$row, $sth->fetchrow_array;
2134              } # add_from_db
2135
2136          my $aoa = csv (in => "file.csv", callbacks => {
2137              after_parse => \&add_from_db });
2138
2139         This hook can be used for validation:
2140
2141         FAIL
2142           Die if any of the records does not validate a rule:
2143
2144            after_parse => sub {
2145                $_[1][4] =~ m/^[0-9]{4}\s?[A-Z]{2}$/ or
2146                    die "5th field does not have a valid Dutch zipcode";
2147                }
2148
2149         DEFAULT
2150           Replace invalid fields with a default value:
2151
2152            after_parse => sub { $_[1][2] =~ m/^\d+$/ or $_[1][2] = 0 }
2153
2154         SKIP
2155           Skip records that have invalid fields (only applies to
2156           "getline_all"):
2157
2158            after_parse => sub { $_[1][0] =~ m/^\d+$/ or return \"skip"; }
2159
2160       before_print
2161          my $idx = 1;
2162          $csv->callbacks (before_print => sub { $_[1][0] = $idx++ });
2163          $csv->print (*STDOUT, [ 0, $_ ]) for @members;
2164
2165         This callback is invoked  before printing with  "print"  only if no
2166         error occurred.  The callback is invoked with two arguments:  the
2167         current  "CSV" parser object and an array reference to the fields
2168         passed.
2169
2170         The return code of the callback is ignored.
2171
2172          sub max_4_fields {
2173              my ($csv, $row) = @_;
2174              @$row > 4 and splice @$row, 4;
2175              } # max_4_fields
2176
2177          csv (in => csv (in => "file.csv"), out => *STDOUT,
2178              callbacks => { before_print => \&max_4_fields });
2179
2180         This callback is not active for "combine".
2181
2182       Callbacks for csv ()
2183
2184       The "csv" allows for some callbacks that do not integrate in XS
2185       internals but only feature the "csv" function.
2186
2187         csv (in        => "file.csv",
2188              callbacks => {
2189                  filter       => { 6 => sub { $_ > 15 } },    # first
2190                  after_parse  => sub { say "AFTER PARSE";  }, # first
2191                  after_in     => sub { say "AFTER IN";     }, # second
2192                  on_in        => sub { say "ON IN";        }, # third
2193                  },
2194              );
2195
2196         csv (in        => $aoh,
2197              out       => "file.csv",
2198              callbacks => {
2199                  on_in        => sub { say "ON IN";        }, # first
2200                  before_out   => sub { say "BEFORE OUT";   }, # second
2201                  before_print => sub { say "BEFORE PRINT"; }, # third
2202                  },
2203              );
2204
2205       filter
2206         This callback can be used to filter records.  It is called just after
2207         a new record has been scanned.  The callback accepts a:
2208
2209         hashref
2210           The keys are the index to the row (the field name or field number,
2211           1-based) and the values are subs to return a true or false value.
2212
2213            csv (in => "file.csv", filter => {
2214                       3 => sub { m/a/ },       # third field should contain an "a"
2215                       5 => sub { length > 4 }, # length of the 5th field minimal 5
2216                       });
2217
2218            csv (in => "file.csv", filter => { foo => sub { $_ > 4 }});
2219
2220           If the keys to the filter hash contain any character that is not a
2221           digit, it will also implicitly set "headers" to "auto"  unless
2222           "headers"  was already passed as argument.  When headers are
2223           active, returning an array of hashes, the filter is not applicable
2224           to the header itself.
2225
2226           All sub results should match, as in AND.
2227
2228           The context of the callback sets  $_ localized to the field
2229           indicated by the filter. The two arguments are as with all other
2230           callbacks, so the other fields in the current row can be seen:
2231
2232            filter => { 3 => sub { $_ > 100 ? $_[1][1] =~ m/A/ : $_[1][6] =~ m/B/ }}
2233
2234           If the context is set to return a list of hashes  ("headers" is
2235           defined), the current record will also be available in the
2236           localized %_:
2237
2238            filter => { 3 => sub { $_ > 100 && $_{foo} =~ m/A/ && $_{bar} < 1000  }}
2239
2240           If the filter is used to alter the content by changing $_,  make
2241           sure that the sub returns true in order not to have that record
2242           skipped:
2243
2244            filter => { 2 => sub { $_ = uc }}
2245
2246           will upper-case the second field, and then skip the record if the
2247           resulting content evaluates to false. To always accept, end with truth:
2248
2249            filter => { 2 => sub { $_ = uc; 1 }}
2250
2251         coderef
2252            csv (in => "file.csv", filter => sub { $n++; 0; });
2253
2254           If the argument to "filter" is a coderef,  it is an alias or
2255           shortcut to a filter on column 0:
2256
2257            csv (filter => sub { $n++; 0 });
2258
2259           is equal to
2260
2261            csv (filter => { 0 => sub { $n++; 0 }});
2262
2263         filter-name
2264            csv (in => "file.csv", filter => "not_blank");
2265            csv (in => "file.csv", filter => "not_empty");
2266            csv (in => "file.csv", filter => "filled");
2267
2268           These are predefined filters.
2269
2270           Given a file like (line numbers prefixed for doc purpose only):
2271
2272            1:1,2,3
2273            2:
2274            3:,
2275            4:""
2276            5:,,
2277            6:, ,
2278            7:"",
2279            8:" "
2280            9:4,5,6
2281
2282           not_blank
2283             Filter out the blank lines
2284
2285             This filter is a shortcut for
2286
2287              filter => { 0 => sub { @{$_[1]} > 1 or
2288                          defined $_[1][0] && $_[1][0] ne "" } }
2289
2290             Due to the implementation,  it is currently impossible to also
2291             filter lines that consist only of a quoted empty field. These
2292             lines are also considered blank lines.
2293
2294             With the given example, lines 2 and 4 will be skipped.
2295
2296           not_empty
2297             Filter out lines where all the fields are empty.
2298
2299             This filter is a shortcut for
2300
2301              filter => { 0 => sub { grep { defined && $_ ne "" } @{$_[1]} } }
2302
2303             A space is not regarded being empty, so given the example data,
2304             lines 2, 3, 4, 5, and 7 are skipped.
2305
2306           filled
2307             Filter out lines that have no visible data
2308
2309             This filter is a shortcut for
2310
2311              filter => { 0 => sub { grep { defined && m/\S/ } @{$_[1]} } }
2312
2313             This filter rejects all lines that do not have at least one
2314             field containing a non-whitespace character.
2315
2316             With the given example data, this filter would skip lines 2
2317             through 8.
2318
2319         One could also use modules like Types::Standard:
2320
2321          use Types::Standard -types;
2322
2323          my $type   = Tuple[Str, Str, Int, Bool, Optional[Num]];
2324          my $check  = $type->compiled_check;
2325
2326          # filter with compiled check and warnings
2327          my $aoa = csv (
2328             in     => \$data,
2329             filter => {
2330                 0 => sub {
2331                     my $ok = $check->($_[1]) or
2332                         warn $type->get_message ($_[1]), "\n";
2333                     return $ok;
2334                     },
2335                 },
2336             );
2337
2338       after_in
2339         This callback is invoked for each record after all records have been
2340         parsed but before returning the reference to the caller.  The hook is
2341         invoked with two arguments:  the current  "CSV"  parser object  and a
2342         reference to the record.   The reference can be a reference to a
2343         HASH  or a reference to an ARRAY as determined by the arguments.
2344
2345         This callback can also be passed as  an attribute without the
2346         "callbacks" wrapper.
2347
2348       before_out
2349         This callback is invoked for each record before the record is
2350         printed.  The hook is invoked with two arguments:  the current "CSV"
2351         parser object and a reference to the record.   The reference can be a
2352         reference to a  HASH or a reference to an ARRAY as determined by the
2353         arguments.
2354
2355         This callback can also be passed as an attribute  without the
2356         "callbacks" wrapper.
2357
2358         This callback makes the row available in %_ if the row is a hashref.
2359         In this case %_ is writable and will change the original row.
2360
2361       on_in
2362         This callback acts exactly as the "after_in" or the "before_out"
2363         hooks.
2364
2365         This callback can also be passed as an attribute  without the
2366         "callbacks" wrapper.
2367
2368         This callback makes the row available in %_ if the row is a hashref.
2369         In this case %_ is writable and will change the original row. So e.g.
2370         with
2371
2372           my $aoh = csv (
2373               in      => \"foo\n1\n2\n",
2374               headers => "auto",
2375               on_in   => sub { $_{bar} = 2; },
2376               );
2377
2378         $aoh will be:
2379
2380           [ { foo => 1,
2381               bar => 2,
2382               },
2383             { foo => 2,
2384               bar => 2,
2385               }
2386             ]
2387
2388       csv
2389         The function  "csv" can also be called as a method or with an
2390         existing Text::CSV_XS object. This can help if the function is
2391         invoked many times:  passing an existing instance avoids the
2392         overhead of creating a new object internally over and over
2393         again.
2394
2395          my $csv = Text::CSV_XS->new ({ binary => 1, auto_diag => 1 });
2396
2397          my $aoa = $csv->csv (in => $fh);
2398          my $aoa = csv (in => $fh, csv => $csv);
2399
2400         both act the same. Running this 20000 times on a 20-line CSV file
2401         showed a 53% speedup.
2402

INTERNALS

2404       Combine (...)
2405       Parse (...)
2406
2407       The arguments to these internal functions are deliberately not
2408       described or documented, in order to allow the module authors to
2409       change them when they feel the need.  Using them is highly
2410       discouraged, as the API may change in future releases.
2411

EXAMPLES

2413   Reading a CSV file line by line:
2414        my $csv = Text::CSV_XS->new ({ binary => 1, auto_diag => 1 });
2415        open my $fh, "<", "file.csv" or die "file.csv: $!";
2416        while (my $row = $csv->getline ($fh)) {
2417            # do something with @$row
2418            }
2419        close $fh or die "file.csv: $!";
2420
2421       or
2422
2423        my $aoh = csv (in => "file.csv", headers => "auto", on_in => sub {
2424            # do something with %_
2425            });
2426
2427       Reading only a single column
2428
2429        my $csv = Text::CSV_XS->new ({ binary => 1, auto_diag => 1 });
2430        open my $fh, "<", "file.csv" or die "file.csv: $!";
2431        # get only the 4th column
2432        my @column = map { $_->[3] } @{$csv->getline_all ($fh)};
2433        close $fh or die "file.csv: $!";
2434
2435       with "csv", you could do
2436
2437        my @column = map { $_->[0] }
2438            @{csv (in => "file.csv", fragment => "col=4")};
2439
2440   Parsing CSV strings:
2441        my $csv = Text::CSV_XS->new ({ keep_meta_info => 1, binary => 1 });
2442
2443        my $sample_input_string =
2444            qq{"I said, ""Hi!""",Yes,"",2.34,,"1.09","\x{20ac}",};
2445        if ($csv->parse ($sample_input_string)) {
2446            my @field = $csv->fields;
2447            foreach my $col (0 .. $#field) {
2448                my $quo = $csv->is_quoted ($col) ? $csv->{quote_char} : "";
2449                printf "%2d: %s%s%s\n", $col, $quo, $field[$col], $quo;
2450                }
2451            }
2452        else {
2453            print STDERR "parse () failed on argument: ",
2454                $csv->error_input, "\n";
2455            $csv->error_diag ();
2456            }
2457
2458       Parsing CSV from memory
2459
2460       Given a complete CSV data-set in scalar $data,  generate a list of
2461       lists to represent the rows and fields
2462
2463        # The data
2464        my $data = join "\r\n" => map { join "," => 0 .. 5 } 0 .. 5;
2465
2466        # in a loop
2467        my $csv = Text::CSV_XS->new ({ binary => 1, auto_diag => 1 });
2468        open my $fh, "<", \$data;
2469        my @foo;
2470        while (my $row = $csv->getline ($fh)) {
2471            push @foo, $row;
2472            }
2473        close $fh;
2474
2475        # a single call
2476        my $foo = csv (in => \$data);
2477
2478   Printing CSV data
2479       The fast way: using "print"
2480
2481       An example for creating "CSV" files using the "print" method:
2482
2483        my $csv = Text::CSV_XS->new ({ binary => 1, eol => $/ });
2484        open my $fh, ">", "foo.csv" or die "foo.csv: $!";
2485        for (1 .. 10) {
2486            $csv->print ($fh, [ $_, "$_" ]) or $csv->error_diag;
2487            }
2488        close $fh or die "foo.csv: $!";
2489
2490       The slow way: using "combine" and "string"
2491
2492       or using the slower "combine" and "string" methods:
2493
2494        my $csv = Text::CSV_XS->new;
2495
2496        open my $csv_fh, ">", "hello.csv" or die "hello.csv: $!";
2497
2498        my @sample_input_fields = (
2499            'You said, "Hello!"',   5.67,
2500            '"Surely"',   '',   '3.14159');
2501        if ($csv->combine (@sample_input_fields)) {
2502            print $csv_fh $csv->string, "\n";
2503            }
2504        else {
2505            print "combine () failed on argument: ",
2506                $csv->error_input, "\n";
2507            }
2508        close $csv_fh or die "hello.csv: $!";
2509
2510       Generating CSV into memory
2511
2512       Format a data-set (@foo) into a scalar value in memory ($data):
2513
2514        # The data
2515        my @foo = map { [ 0 .. 5 ] } 0 .. 3;
2516
2517        # in a loop
2518        my $csv = Text::CSV_XS->new ({ binary => 1, auto_diag => 1, eol => "\r\n" });
2519        open my $fh, ">", \my $data;
2520        $csv->print ($fh, $_) for @foo;
2521        close $fh;
2522
2523        # a single call
2524        csv (in => \@foo, out => \my $data);
2525
2526   Rewriting CSV
2527       Rewrite "CSV" files with ";" as separator character to well-formed
2528       "CSV":
2529
2530        use Text::CSV_XS qw( csv );
2531        csv (in => csv (in => "bad.csv", sep_char => ";"), out => *STDOUT);
2532
2533       As "STDOUT" is now default in "csv", a one-liner converting a UTF-16
2534       CSV file with BOM and TAB-separation to valid UTF-8 CSV could be:
2535
2536        $ perl -C3 -MText::CSV_XS=csv -we\
2537           'csv(in=>"utf16tab.csv",encoding=>"utf16",sep=>"\t")' >utf8.csv
2538
2539   Dumping database tables to CSV
2540       Dumping a database table can be as simple as this (TIMTOWTDI):
2541
2542        my $dbh = DBI->connect (...);
2543        my $sql = "select * from foo";
2544
2545        # using your own loop
2546        open my $fh, ">", "foo.csv" or die "foo.csv: $!\n";
2547        my $csv = Text::CSV_XS->new ({ binary => 1, eol => "\r\n" });
2548        my $sth = $dbh->prepare ($sql); $sth->execute;
2549        $csv->print ($fh, $sth->{NAME_lc});
2550        while (my $row = $sth->fetch) {
2551            $csv->print ($fh, $row);
2552            }
2553
2554        # using the csv function, all in memory
2555        csv (out => "foo.csv", in => $dbh->selectall_arrayref ($sql));
2556
2557        # using the csv function, streaming with callbacks
2558        my $sth = $dbh->prepare ($sql); $sth->execute;
2559        csv (out => "foo.csv", in => sub { $sth->fetch            });
2560        csv (out => "foo.csv", in => sub { $sth->fetchrow_hashref });
2561
2562       Note that this does not discriminate between "empty" values and NULL-
2563       values from the database,  as both will be the same empty field in CSV.
2564       To enable distinction between the two, use "quote_empty".
2565
2566        csv (out => "foo.csv", in => sub { $sth->fetch }, quote_empty => 1);
2567
2568       If the database import utility supports special sequences to insert
2569       "NULL" values into the database,  like MySQL/MariaDB supports "\N",
2570       use a filter or a map
2571
2572        csv (out => "foo.csv", in => sub { $sth->fetch },
2573                            on_in => sub { $_ //= "\\N" for @{$_[1]} });
2574
2575        while (my $row = $sth->fetch) {
2576            $csv->print ($fh, [ map { $_ // "\\N" } @$row ]);
2577            }
2578
2579       Note that this will not work as expected when choosing the backslash
2580       ("\") as "escape_char", as that will cause the "\" to need to be
2581       escaped by yet another "\",  which will cause the field to need
2582       quotation and thus ending up as "\\N" instead of "\N". See also
2583       "undef_str".
2584
2585        csv (out => "foo.csv", in => sub { $sth->fetch }, undef_str => "\\N");
2586
2587       These special sequences are not recognized by  Text::CSV_XS  on parsing
2588       the CSV generated like this, but map and filter are your friends again
2589
2590        while (my $row = $csv->getline ($fh)) {
2591            $sth->execute (map { $_ eq "\\N" ? undef : $_ } @$row);
2592            }
2593
2594        csv (in => "foo.csv", filter => { 1 => sub {
2595            $sth->execute (map { $_ eq "\\N" ? undef : $_ } @{$_[1]}); 0; }});
2596
2597   Converting CSV to JSON
2598        use Text::CSV_XS qw( csv );
2599        use JSON; # or Cpanel::JSON::XS for better performance
2600
2601        # AoA (no header interpretation)
2602        say encode_json (csv (in => "file.csv"));
2603
2604        # AoH (convert to structures)
2605        say encode_json (csv (in => "file.csv", bom => 1));
2606
2607       Yes, it is that simple.
2608
2609   The examples folder
2610       For more extended examples, see the examples/ sub-directory [1] in
2611       the original distribution or the git repository [2].
2612
2613        [1] https://github.com/Tux/Text-CSV_XS/tree/master/examples
2614        [2] https://github.com/Tux/Text-CSV_XS
2615
2616       The following files can be found there:
2617
2618       parser-xs.pl
2619         This can be used as a boilerplate to parse invalid "CSV"  and parse
2620         beyond (expected) errors, as an alternative to using the "error" callback.
2621
2622          $ perl examples/parser-xs.pl bad.csv >good.csv
2623
2624       csv-check
2625         This is a command-line tool that uses parser-xs.pl  techniques to
2626         check the "CSV" file and report on its content.
2627
2628          $ csv-check files/utf8.csv
2629          Checked files/utf8.csv  with csv-check 1.9
2630          using Text::CSV_XS 1.32 with perl 5.26.0 and Unicode 9.0.0
2631          OK: rows: 1, columns: 2
2632              sep = <,>, quo = <">, bin = <1>, eol = <"\n">
2633
2634       csv-split
2635         This command splits "CSV" files into smaller files,  keeping (part
2636         of) the header.  Options include maximum number of (data) rows per
2637         file and maximum number of columns per file or a combination of the
2638         two.
2639
2640       csv2xls
2641         A script to convert "CSV" to Microsoft Excel ("XLS"). This requires
2642         extra modules Date::Calc and Spreadsheet::WriteExcel. The converter
2643         accepts various options and can produce UTF-8 compliant Excel files.
2644
2645       csv2xlsx
2646         A script to convert "CSV" to Microsoft Excel ("XLSX").  This requires
2647         the modules Date::Calc and Excel::Writer::XLSX.  The converter
2648         does accept various options including merging several "CSV" files
2649         into a single Excel file.
2650
2651       csvdiff
2652         A script that provides colorized diff on sorted CSV files,  assuming
2653         first line is header and first field is the key. Output options
2654         include colorized ANSI escape codes or HTML.
2655
2656          $ csvdiff --html --output=diff.html file1.csv file2.csv
2657
2658       rewrite.pl
2659         A script to rewrite (in)valid CSV into valid CSV files.  Script has
2660         options to generate confusing CSV files or CSV files that conform to
2661         Dutch MS-Excel exports (using ";" as separator).
2662
2663         By default, the script honors a BOM and auto-detects the
2664         separator, converting the input to standard CSV with "," as separator.
2665

CAVEATS

2667       Text::CSV_XS  is not designed to detect the characters used to quote
2668       and separate fields.  The parsing is done using predefined  (default)
2669       settings.  In the examples  sub-directory,  you can find scripts  that
2670       demonstrate how you could try to detect these characters yourself.
2671
2672   Microsoft Excel
2673       The import/export from Microsoft Excel is a risky task, according to
2674       the documentation in "Text::CSV::Separator".  Microsoft uses the
2675       system's list separator defined in the regional settings, which happens
2676       to be a semicolon for Dutch, German and Spanish (and probably some
2677       others as well).   For the English locale,  the default is a comma.
2678       In Windows however,  the user is free to choose a  predefined locale,
2679       and then change  every  individual setting in it, so checking the
2680       locale is no solution.
2681
2682       As of version 1.17, a lone first line with just
2683
2684         sep=;
2685
2686       will be recognized and honored when parsing with "getline".
2687

TODO

2689       More Errors & Warnings
2690         New extensions ought to be  clear and concise  in reporting what
2691         error has occurred where and why, and maybe also offer a remedy to
2692         the problem.
2693
2694         "error_diag" is a (very) good start, but there is more work to be
2695         done in this area.
2696
2697         Basic calls  should croak or warn on  illegal parameters.  Errors
2698         should be documented.
2699
2700       setting meta info
2701         Future extensions might include extending the "meta_info",
2702         "is_quoted", and  "is_binary"  to accept setting these  flags for
2703         fields,  so you can specify which fields are quoted in the
2704         "combine"/"string" combination.
2705
2706          $csv->meta_info (0, 1, 1, 3, 0, 0);
2707          $csv->is_quoted (3, 1);
2708
2709         Metadata Vocabulary for Tabular Data
2710         <http://w3c.github.io/csvw/metadata/> (a W3C editor's draft) could be
2711         an example for supporting more metadata.
2712
2713       Parse the whole file at once
2714         Implement new methods or functions  that enable parsing of a
2715         complete file at once, returning a list of hashes. Possible extension
2716         to this could be to enable a column selection on the call:
2717
2718          my @AoH = $csv->parse_file ($filename, { cols => [ 1, 4..8, 12 ]});
2719
2720         returning something like
2721
2722          [ { fields => [ 1, 2, "foo", 4.5, undef, "", 8 ],
2723              flags  => [ ... ],
2724              },
2725            { fields => [ ... ],
2726              .
2727              },
2728            ]
2729
2730         Note that the "csv" function already supports most of this,  but does
2731         not return flags. "getline_all" returns all rows for an open stream,
2732         but this will not return flags either.  "fragment"  can reduce the
2733         required  rows or columns, but cannot combine them.
2734
2735       Cookbook
2736         Write a document that has recipes for  most known  non-standard  (and
2737         maybe some standard)  "CSV" formats,  including formats that use
2738         "TAB",  ";", "|", or other non-comma separators.
2739
2740         Examples could be taken from W3C's CSV on the Web: Use Cases and
2741         Requirements <http://w3c.github.io/csvw/use-cases-and-
2742         requirements/index.html>
2743
2744       Steal
2745         Steal good new ideas and features from PapaParse
2746         <http://papaparse.com> or csvkit <http://csvkit.readthedocs.org>.
2747
2748       Raku support
2749         Raku support can be found here <https://github.com/Tux/CSV>. The
2750         interface is richer in support than the Perl5 API, as Raku supports
2751         more types.
2752
2753         The Raku version does not (yet) support pure binary CSV datasets.
2754
2755   NOT TODO
2756       combined methods
2757         Requests for adding means (methods) that combine "combine" and
2758         "string" in a single call will not be honored (use "print" instead).
2759         Likewise for "parse" and "fields"  (use "getline" instead), given the
2760         problems with embedded newlines.
2761
2762   Release plan
2763       No guarantees, but this is what I had in mind some time ago:
2764
2765       • DIAGNOSTICS section in pod to *describe* the errors (see below)
2766

EBCDIC

2768       Everything should now work on native EBCDIC systems.   As the test does
2769       not cover all possible codepoints and Encode does not support
2770       "utf-ebcdic", there is no guarantee that all handling of Unicode is
2771       done correctly.
2772
2773       Opening "EBCDIC" encoded files on  "ASCII"+  systems is likely to
2774       succeed using Encode's "cp37", "cp1047", or "posix-bc":
2775
2776        open my $fh, "<:encoding(cp1047)", "ebcdic_file.csv" or die "...";
2777

DIAGNOSTICS

2779       Still under construction ...
2780
2781       If an error occurs,  "$csv->error_diag" can be used to get information
2782       on the cause of the failure. Note that for speed reasons the internal
2783       value is never cleared on success,  so using the value returned by
2784       "error_diag" in normal cases - when no error occurred - may cause
2785       unexpected results.
2786
2787       If the constructor failed, the cause can be found using "error_diag" as
2788       a class method, like "Text::CSV_XS->error_diag".
2789
2790       The "$csv->error_diag" method is automatically invoked upon error when
2791       the constructor was called with  "auto_diag"  set to  1 or 2, or when
2792       autodie is in effect.  When set to 1, this will cause a "warn" with the
2793       error message,  when set to 2, it will "die". "2012 - EOF" is excluded
2794       from "auto_diag" reports.
2795
2796       Errors can be (individually) caught using the "error" callback.
2797
2798       The errors as described below are available. I have tried to make the
2799       error itself explanatory enough, but more descriptions will be added.
2800       For most of these errors, the first three capitals describe the error
2801       category:
2802
2803       • INI
2804
2805         Initialization error or option conflict.
2806
2807       • ECR
2808
2809         Carriage-Return related parse error.
2810
2811       • EOF
2812
2813         End-Of-File related parse error.
2814
2815       • EIQ
2816
2817         Parse error inside quotation.
2818
2819       • EIF
2820
2821         Parse error inside field.
2822
2823       • ECB
2824
2825         Combine error.
2826
2827       • EHR
2828
2829         HashRef parse related error.
2830
2831       And below should be the complete list of error codes that can be
2832       returned:
2833
2834       • 1001 "INI - sep_char is equal to quote_char or escape_char"
2835
2836         The  separation character  cannot be equal to  the quotation
2837         character or to the escape character,  as this would invalidate all
2838         parsing rules.
2839
2840       • 1002 "INI - allow_whitespace with escape_char or quote_char SP or
2841         TAB"
2842
2843         Using the  "allow_whitespace"  attribute  when either "quote_char" or
2844         "escape_char"  is equal to "SPACE" or "TAB" is too ambiguous to
2845         allow.
2846
2847       • 1003 "INI - \r or \n in main attr not allowed"
2848
2849         Using default "eol" characters in either "sep_char", "quote_char",
2850         or  "escape_char"  is  not allowed.
2851
2852       • 1004 "INI - callbacks should be undef or a hashref"
2853
2854         The "callbacks"  attribute only allows one to be "undef" or a hash
2855         The "callbacks" attribute only accepts "undef" or a hash
2856         reference.
2857       • 1005 "INI - EOL too long"
2858
2859         The value passed for EOL exceeds its maximum length (16).
2860
2861       • 1006 "INI - SEP too long"
2862
2863         The value passed for SEP exceeds its maximum length (16).
2864
2865       • 1007 "INI - QUOTE too long"
2866
2867         The value passed for QUOTE exceeds its maximum length (16).
2868
2869       • 1008 "INI - SEP undefined"
2870
2871         The value passed for SEP should be defined and not empty.
2872
2873       • 1010 "INI - the header is empty"
2874
2875         The header line parsed in the "header" is empty.
2876
2877       • 1011 "INI - the header contains more than one valid separator"
2878
2879         The header line parsed in the  "header"  contains more than one
2880         (unique) separator character out of the allowed set of separators.
2881
2882       • 1012 "INI - the header contains an empty field"
2883
2884         The header line parsed in the "header" contains an empty field.
2885
2886       • 1013 "INI - the header contains non-unique fields"
2887
2888         The header line parsed in the  "header"  contains at least  two
2889         identical fields.
2890
2891       • 1014 "INI - header called on undefined stream"
2892
2893         The header line cannot be parsed from an undefined source.
2894
2895       • 1500 "PRM - Invalid/unsupported argument(s)"
2896
2897         Function or method called with invalid argument(s) or parameter(s).
2898
2899       • 1501 "PRM - The key attribute is passed as an unsupported type"
2900
2901         The "key" attribute is of an unsupported type.
2902
2903       • 1502 "PRM - The value attribute is passed without the key attribute"
2904
2905         The "value" attribute is only allowed when a valid key is given.
2906
2907       • 1503 "PRM - The value attribute is passed as an unsupported type"
2908
2909         The "value" attribute is of an unsupported type.
2910
2911       • 2010 "ECR - QUO char inside quotes followed by CR not part of EOL"
2912
2913         When "eol" has been set to anything but the default, like "\r\t\n",
2914         and a "\r" follows the second (closing) "quote_char" but the
2915         characters following the "\r" do not make up the "eol" sequence,
2916         this is an error.
2917
2918       • 2011 "ECR - Characters after end of quoted field"
2919
2920         Sequences like "1,foo,"bar"baz,22,1" are not allowed. "bar" is a
2921         quoted field and after the closing double-quote, there should be
2922         either a new-line sequence or a separation character.
2923
2924       • 2012 "EOF - End of data in parsing input stream"
2925
2926         Self-explanatory:  end-of-file was reached while still parsing a
2927         stream.  This can happen only when reading from streams with
2928         "getline",  as "parse" is done on strings that are not required to
2929         have a trailing "eol".
2930
2931       • 2013 "INI - Specification error for fragments RFC7111"
2932
2933         Invalid RFC 7111 URI "fragment" specification.
2934
2935       • 2014 "ENF - Inconsistent number of fields"
2936
2937         Inconsistent number of fields under strict parsing.
2938
2939       • 2021 "EIQ - NL char inside quotes, binary off"
2940
2941         Sequences like "1,"foo\nbar",22,1" are allowed only when the binary
2942         option has been selected with the constructor.
2943
2944       • 2022 "EIQ - CR char inside quotes, binary off"
2945
2946         Sequences like "1,"foo\rbar",22,1" are allowed only when the binary
2947         option has been selected with the constructor.
2948
2949       • 2023 "EIQ - QUO character not allowed"
2950
2951         Sequences like ""foo "bar" baz",qu" and "2023,",2008-04-05,"Foo,
2952         Bar",\n" will cause this error.
2953
2954       • 2024 "EIQ - EOF cannot be escaped, not even inside quotes"
2955
2956         The escape character is not allowed as last character in an input
2957         stream.
2958
2959       • 2025 "EIQ - Loose unescaped escape"
2960
2961         An escape character should escape only characters that need escaping.
2962
2963         Allowing  the escape  for other characters  is possible  with the
2964         attribute "allow_loose_escapes".
2965
2966       • 2026 "EIQ - Binary character inside quoted field, binary off"
2967
2968         Binary characters are not allowed by default.   Fields that contain
2969         valid UTF-8 are an exception:  they will automatically be upgraded.
2970         Set "binary" to 1 to accept binary data.
2972
2973       • 2027 "EIQ - Quoted field not terminated"
2974
2975         When parsing a field that started with a quotation character,  the
2976         field is expected to be closed with a quotation character.   When the
2977         parsed line is exhausted before the quote is found, that field is not
2978         terminated.
2979
2980       • 2030 "EIF - NL char inside unquoted verbatim, binary off"
2981
2982       • 2031 "EIF - CR char is first char of field, not part of EOL"
2983
2984       • 2032 "EIF - CR char inside unquoted, not part of EOL"
2985
2986       • 2034 "EIF - Loose unescaped quote"
2987
2988       • 2035 "EIF - Escaped EOF in unquoted field"
2989
2990       • 2036 "EIF - ESC error"
2991
2992       • 2037 "EIF - Binary character in unquoted field, binary off"
2993
2994       • 2110 "ECB - Binary character in Combine, binary off"
2995
2996       • 2200 "EIO - print to IO failed. See errno"
2997
2998       • 3001 "EHR - Unsupported syntax for column_names ()"
2999
3000       • 3002 "EHR - getline_hr () called before column_names ()"
3001
3002       • 3003 "EHR - bind_columns () and column_names () fields count
3003         mismatch"
3004
3005       • 3004 "EHR - bind_columns () only accepts refs to scalars"
3006
3007       • 3006 "EHR - bind_columns () did not pass enough refs for parsed
3008         fields"
3009
3010       • 3007 "EHR - bind_columns needs refs to writable scalars"
3011
3012       • 3008 "EHR - unexpected error in bound fields"
3013
3014       • 3009 "EHR - print_hr () called before column_names ()"
3015
3016       • 3010 "EHR - print_hr () called with invalid arguments"
3017

SEE ALSO

3019       IO::File,  IO::Handle,  IO::Wrap,  Text::CSV,  Text::CSV_PP,
3020       Text::CSV::Encoded,     Text::CSV::Separator,    Text::CSV::Slurp,
3021       Spreadsheet::CSV and Spreadsheet::Read, and of course perl.
3022
3023       If you are using Raku,  have a look at "Text::CSV" in the Raku
3024       ecosystem, offering the same features.
3025
3026       non-perl
3027
3028       A CSV parser in JavaScript,  also used by W3C <http://www.w3.org>,  is
3029       the multi-threaded in-browser PapaParse <http://papaparse.com/>.
3030
3031       csvkit <http://csvkit.readthedocs.org> is a Python CSV parsing toolkit.
3032

AUTHOR

3034       Alan Citterman <alan@mfgrtl.com> wrote the original Perl module.
3035       Please don't send mail concerning Text::CSV_XS to Alan, who is not
3036       involved in the C/XS part that is now the main part of the module.
3037
3038       Jochen Wiedmann <joe@ispsoft.de> rewrote the en- and decoding in C by
3039       implementing a simple finite-state machine.   He added variable quote,
3040       escape and separator characters, the binary mode and the print and
3041       getline methods. See ChangeLog releases 0.10 through 0.23.
3042
3043       H.Merijn Brand <h.m.brand@xs4all.nl> cleaned up the code,  added the
3044       field flags methods,  wrote the major part of the test suite, completed
3045       the documentation,   fixed most RT bugs,  added all the allow flags and
3046       the "csv" function. See ChangeLog releases 0.25 and on.
3047

COPYRIGHT AND LICENSE

3049        Copyright (C) 2007-2021 H.Merijn Brand.  All rights reserved.
3050        Copyright (C) 1998-2001 Jochen Wiedmann. All rights reserved.
3051        Copyright (C) 1997      Alan Citterman.  All rights reserved.
3052
3053       This library is free software;  you can redistribute and/or modify it
3054       under the same terms as Perl itself.
3055
3056
3057
3058perl v5.34.0                      2022-01-21                         CSV_XS(3)