WWW::Mechanize::FAQ(3)    User Contributed Perl Documentation   WWW::Mechanize::FAQ(3)


NAME
    WWW::Mechanize::FAQ - Frequently Asked Questions about WWW::Mechanize

    If your question isn't answered here in the FAQ, please turn to the
    communities at:

    · <http://perlmonks.org>

    · The libwww-perl mailing list at <http://lists.perl.org>

JAVASCRIPT
  I have this web page that has JavaScript on it, and my Mech program doesn't work.
    That's because WWW::Mechanize doesn't operate on the JavaScript. It only
    understands the HTML parts of the page.

  I thought Mech was supposed to work like a web browser.
    It does, pretty much, but it doesn't support JavaScript.

    I added some basic attempts at picking up URLs in "window.open()" calls
    and returning them in "$mech->links". They work sometimes. Beyond that,
    there's no support for JavaScript.

  Are you going to add JavaScript support?
    I will if anyone sends me the code to do it. I'm not going to write a
    JavaScript processor myself.

  Wouldn't that be a great thing to have in WWW::Mechanize?
    Yes.

  Would it be hard to do?
    Hard enough that I don't want to deal with it myself. Plus, I don't use
    JavaScript myself, so I don't have an itch to scratch.

  Is anyone working on it?
    I've heard noises from people every so often over the past couple of
    years, but nothing you'd pin your hopes on.

  It would really help me with a project I'm working on.
    I'm sure it would.

  Do you know when it might get added?
    I have no idea if or when such a thing will ever get done. I can
    guarantee that as soon as there's anything close to JavaScript support,
    I will let everyone know.

  Maybe I'll ask around and see if anyone else knows of a solution.
    If you must, but I doubt that anyone's written JavaScript support for
    Mechanize and neglected to tell me about it.

  So what can I do?
    Since JavaScript is completely visible to the client, it cannot be used
    to prevent a scraper from following links. But it can make life
    difficult, and until someone writes a JavaScript interpreter for Perl or
    a Mechanize clone to control Firefox, there will be no general solution.
    If you only want to scrape specific pages, though, a solution is always
    possible.

    One typical use of JavaScript is to perform argument checking before
    posting to the server. The URL you want is probably just buried in the
    JavaScript function. Do a regular expression match on
    "$mech->content()" to find the link that you want, and "$mech->get" it
    directly (this assumes that you know what you are looking for in
    advance).
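
    For instance, here is a minimal sketch of that approach; the page URL
    and the "window.open()" pattern are invented for illustration:

        use WWW::Mechanize;

        my $mech = WWW::Mechanize->new( autocheck => 1 );
        $mech->get( 'http://www.example.com/page-with-javascript.html' );

        # Pull the target URL out of a window.open() call in the page source.
        if ( my ($url) = $mech->content =~ /window\.open\(\s*['"]([^'"]+)['"]/ ) {
            $mech->get( $url );    # follow it with a plain GET
        }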

    In more difficult cases, the JavaScript is used for URL mangling to
    satisfy the needs of some middleware. In this case you need to figure
    out what the JavaScript is doing (why are these URLs always so long?).
    There is probably some function with one or more arguments which
    calculates the new URL. Step one: using your favorite browser, get the
    before and after URLs and save them to files. Edit each file, converting
    the argument separators ('?', '&' or ';') into newlines. Now it is easy
    to use diff or comm to find out what the JavaScript did to the URL. Step
    two: find the function call which created the URL; you will need to
    parse and interpret its argument list. Using the JavaScript Debugger
    Extension for Firefox may help with the analysis. At this point, it is
    fairly trivial to write your own function which emulates the JavaScript
    for the pages you want to process.
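
    If you prefer to do the separator-to-newline conversion in Perl, here is
    a minimal sketch (the two URLs and file names are placeholders):

        # Break each URL at its argument separators so that diff or comm
        # can compare them line by line.
        my $before = 'http://example.com/page?a=1&b=2;c=3';
        my $after  = 'http://example.com/page?a=1&b=2;c=3&token=XYZ';

        for my $pair ( [ before => $before ], [ after => $after ] ) {
            my ( $name, $url ) = @{$pair};
            open my $fh, '>', "$name.txt" or die "Can't write $name.txt: $!";
            print {$fh} join( "\n", split /[?&;]/, $url ), "\n";
            close $fh;
        }
        # Now "diff before.txt after.txt" shows what the JavaScript changed.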

    Here's another approach that answers the question, "It works in Firefox,
    but why not Mech?" Everything the web server knows about the client is
    present in the HTTP request. If two requests are identical, the results
    should be identical. So the real question is "What is different between
    the Mech request and the Firefox request?"

    The Firefox extension "Tamper Data" is an effective tool for examining
    the headers of the requests to the server. Compare those headers with
    what LWP is sending. Once the two are identical, the action of the
    server should be the same as well.

    I say "should", because this is an oversimplification; some values are
    naturally unique, e.g. a SessionID, but if a SessionID is present, that
    is probably sufficient, even though the value will differ between the
    LWP request and the Firefox request. The server could use the session
    to store information which causes trouble, but that's not the first
    place to look (and it is highly unlikely to be relevant when you are
    requesting the login page of your site).

    Generally the problem is to be found in missing or incorrect POSTDATA
    arguments, cookies, User-Agent, Accept, and similar headers. If you are
    using Mech, then redirects and cookies should not be a problem, but they
    are listed here for completeness. If you are missing headers,
    "$mech->add_header" can be used to add the headers that you need.
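
    As a sketch of what that usually looks like (the header values here are
    whatever your browser actually sent, not required ones):

        my $mech = WWW::Mechanize->new( autocheck => 1 );

        # Pretend to be a mainstream browser and send the headers you saw
        # in Tamper Data.
        $mech->agent( 'Mozilla/5.0 (Windows NT 6.1; rv:10.0) Gecko/20100101 Firefox/10.0' );
        $mech->add_header(
            'Accept'          => 'text/html,application/xhtml+xml',
            'Accept-Language' => 'en-US,en;q=0.5',
            'Referer'         => 'http://www.example.com/login.html',
        );

        $mech->get( 'http://www.example.com/protected.html' );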

  Can I do [such-and-such] with WWW::Mechanize?
    If it's possible with LWP::UserAgent, then yes. WWW::Mechanize is a
    subclass of LWP::UserAgent, so all the wondrous magic of that class is
    inherited.

  How do I use WWW::Mechanize through a proxy server?
    See the docs in LWP::UserAgent on how to use the proxy. Short version:

        $mech->proxy( ['http', 'ftp'], 'http://proxy.example.com:8000/' );

    or get the specs from the environment:

        $mech->env_proxy();

        # Environment set like so:
        gopher_proxy=http://proxy.my.place/
        wais_proxy=http://proxy.my.place/
        no_proxy="localhost,my.domain"
        export gopher_proxy wais_proxy no_proxy

  How can I see what fields are on the forms?
    Use the mech-dump utility, optionally installed with Mechanize.

        $ mech-dump --forms http://search.cpan.org
        Dumping forms
        GET http://search.cpan.org/search
          query=
          mode=all (option) [*all|module|dist|author]
          <NONAME>=CPAN Search (submit)

  How do I get Mech to handle authentication?
        use MIME::Base64;

        my $agent = WWW::Mechanize->new();
        my @args = (
            Authorization => "Basic " .
                MIME::Base64::encode( USER . ':' . PASS )
        );

        $agent->credentials( ADDRESS, REALM, USER, PASS );
        $agent->get( URL, @args );

    If you want to use the credentials for all future requests, you can also
    use the LWP::UserAgent "default_header()" method instead of the extra
    arguments to "get()":

        $mech->default_header(
            Authorization => 'Basic ' . encode_base64( USER . ':' . PASS ) );

  How can I get WWW::Mechanize to execute this JavaScript?
    You can't. JavaScript is entirely client-based, and WWW::Mechanize is a
    client that doesn't understand JavaScript. See the top part of this FAQ.

  How do I check a checkbox that doesn't have a value defined?
    Set it to the value of "on".

        $mech->field( my_checkbox => 'on' );

  How do I handle frames?
    You don't deal with them as frames, per se, but as links. Extract them
    with

        my @frame_links = $mech->find_all_links( tag => "frame" );
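
    Each element is a WWW::Mechanize::Link object, so you can then fetch
    every frame in turn:

        for my $link ( @frame_links ) {
            $mech->get( $link->url );    # fetch the frame's document
            # ... work with $mech->content() for this frame ...
        }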

  How do I get a list of HTTP headers and their values?
    All HTTP::Headers methods work on the HTTP::Response object returned by
    the get(), reload(), response()/res(), click(), submit_form(), and
    request() methods.

        my $mech = WWW::Mechanize->new( autocheck => 1 );
        $mech->get( 'http://my.site.com' );
        my $res = $mech->response();
        for my $key ( $res->header_field_names() ) {
            print $key, " : ", $res->header( $key ), "\n";
        }

  How do I enable keep-alive?
    Since WWW::Mechanize is a subclass of LWP::UserAgent, you can use the
    same mechanism to enable keep-alive:

        use LWP::ConnCache;
        ...
        $mech->conn_cache( LWP::ConnCache->new );

  How can I change/specify the action parameter of an HTML form?
    You can access the action of the form through the HTML::Form object
    returned by any of the form-selection methods.

    Using "$mech->form_number($number)":

        my $mech = WWW::Mechanize->new;
        $mech->get( 'http://someurlhere.com' );
        # Select the form by its position in DOM order (numbered from 1)
        $mech->form_number(1)->action( 'http://newAction' );  # absolute URL

    Using "$mech->form_name($name)":

        my $mech = WWW::Mechanize->new;
        $mech->get( 'http://someurlhere.com' );
        # Select the form by its "name" attribute
        $mech->form_name('trgForm')->action( 'http://newAction' );  # absolute URL

  How do I save an image? How do I save a large tarball?
    An image is just content. You get the image and save it.

        $mech->get( 'photo.jpg' );
        $mech->save_content( '/path/to/my/directory/photo.jpg' );

    You can also save any content directly to disk using the
    ":content_file" flag to "get()", which is part of LWP::UserAgent.

        $mech->get( 'http://www.cpan.org/src/stable.tar.gz',
                    ':content_file' => 'stable.tar.gz' );

  How do I pick a specific value from a "<select>" list?
    Find the "HTML::Form::ListInput" in the page.

        my ($listbox) = $mech->find_all_inputs( name => 'listbox' );

    Then create a hash for the lookup:

        my %name_lookup;
        @name_lookup{ $listbox->value_names } = $listbox->possible_values;
        my $value = $name_lookup{ 'Name I want' };

    If you have duplicate names, this method won't work, and you'll have to
    loop over "$listbox->value_names" and "$listbox->possible_values" in
    parallel until you find a matching name.
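
    A minimal sketch of that parallel loop (the target name is a
    placeholder):

        my @names  = $listbox->value_names;
        my @values = $listbox->possible_values;

        my $value;
        for my $i ( 0 .. $#names ) {
            if ( $names[$i] eq 'Name I want' ) {
                $value = $values[$i];
                last;
            }
        }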

  How do I get Mech to not follow redirects?
    You use functionality in LWP::UserAgent, not Mech itself.

        $mech->requests_redirectable( [] );

    Or you can set "max_redirect":

        $mech->max_redirect( 0 );

    Both of these options can also be set in the constructor. Mech doesn't
    understand them, so it will pass them through to the LWP::UserAgent
    constructor.
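
    For example, handed straight to the constructor:

        my $mech = WWW::Mechanize->new(
            autocheck    => 1,
            max_redirect => 0,    # passed through to LWP::UserAgent
        );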

  My Mech program doesn't work, but it works in the browser.
    Mechanize acts like a browser, but apparently something you're doing
    does not match the browser's behavior. Maybe it's expecting a certain
    web client, or maybe you're not handling a field properly. For some
    reason, your Mech program isn't doing exactly what the browser is
    doing, and when you find that difference, you'll have the answer.

  My Mech program gets these 500 errors.
    A 500 error from the web server says that the program on the server
    side died. Probably the web server program was expecting certain inputs
    that you didn't supply, and instead of handling it nicely, the program
    died.

    Whatever the cause of the 500 error, if it works in the browser, but
    not in your Mech program, you're not acting like the browser. See the
    previous question.

  Why doesn't my program handle this form correctly?
    Run mech-dump on your page and see what it says.

    mech-dump is a marvelous diagnostic tool for figuring out what forms
    and fields are on the page. Say you're scraping CNN.com; you'd get
    this:

        $ mech-dump http://www.cnn.com/
        GET http://search.cnn.com/cnn/search
          source=cnn (hidden readonly)
          invocationType=search/top (hidden readonly)
          sites=web (radio) [*web/The Web ??|cnn/CNN.com ??]
          query= (text)
          <NONAME>=Search (submit)

        POST http://cgi.money.cnn.com/servlets/quote_redirect
          query= (text)
          <NONAME>=GET (submit)

        POST http://polls.cnn.com/poll
          poll_id=2112 (hidden readonly)
          question_1=<UNDEF> (radio) [1/Simplistic option|2/VIEW RESULTS]
          <NONAME>=VOTE (submit)

        GET http://search.cnn.com/cnn/search
          source=cnn (hidden readonly)
          invocationType=search/bottom (hidden readonly)
          sites=web (radio) [*web/??CNN.com|cnn/??]
          query= (text)
          <NONAME>=Search (submit)

    Four forms, including the first one duplicated at the end. All the
    fields, all their defaults, lovingly generated by HTML::Form's "dump"
    method.

    If you want to run mech-dump on something that doesn't lend itself to a
    quick URL fetch, then use the "save_content()" method to write the HTML
    to a file, and run mech-dump on the file.

  Why don't https:// URLs work?
    You need either IO::Socket::SSL or Crypt::SSLeay installed.

  Why do I get "Input 'fieldname' is readonly"?
    You're trying to change the value of a hidden field and you have
    warnings on.

    First, make sure that you actually mean to change the field that you're
    changing, and that you don't have a typo. Usually, hidden variables are
    set by the site you're working on for a reason. If you change the
    value, you might be breaking some functionality by faking it out.

    If you really do want to change a hidden value, make the changes in a
    scope that has warnings turned off:

        {
            local $^W = 0;
            $agent->field( name => $value );
        }

  I tried to [such-and-such] and I got this weird error.
    Are you checking your errors?

    Are you sure?

    Are you checking that your action succeeded after every action?

    Are you sure?

    For example, if you try this:

        $mech->get( "http://my.site.com" );
        $mech->follow_link( text => "foo" );

    and the "get" call fails for some reason, then the Mech internals will
    be unusable for the "follow_link" and you'll get a weird error. You
    must, after every action that GETs or POSTs a page, check that Mech
    succeeded, or all bets are off.

        $mech->get( "http://my.site.com" );
        die "Can't even get the home page: ", $mech->response->status_line
            unless $mech->success;

        $mech->follow_link( text => "foo" );
        die "Foo link failed: ", $mech->response->status_line
            unless $mech->success;

  How do I figure out why "$mech->get($url)" doesn't work?
    There are many reasons why a "get()" can fail. The server can take you
    someplace you didn't expect. It can generate redirects which are not
    properly handled. You can get time-outs. Servers are down more often
    than you think! And so on. A couple of places to start:

    1.  Check "$mech->status()" after each call.

    2.  Check the URL with "$mech->uri()" to see where you ended up.

    3.  Try debugging with "LWP::Debug".

    If things are really strange, turn on debugging with "use LWP::Debug
    qw(+);". Just put this in the main program. This causes LWP to print
    out a trace of the HTTP traffic between client and server and can be
    used to figure out what is happening at the protocol level.

    It is also useful to set many traps to verify that processing is
    proceeding as expected. A Mech program should always have an "I didn't
    expect to get here" or "I don't recognize the page that I am
    processing" case and bail out.
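
    A minimal sketch of such traps; the URL and the marker text it checks
    for are invented for illustration:

        $mech->get( 'http://www.example.com/step1.html' );
        die "Can't fetch step 1: ", $mech->status unless $mech->success;

        # Trap: bail out if we didn't land where we expected to.
        die "Unexpected URL: ", $mech->uri
            unless $mech->uri =~ m{/step1\.html};
        die "Don't recognize this page"
            unless $mech->content =~ /Step One/;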

    Since errors can be transient, by the time you notice that the error
    has occurred, it might not be possible to reproduce it manually. So for
    automated processing it is useful to email yourself the following
    information:

    · where processing is taking place

    · an error message

    · $mech->uri

    · $mech->content

    You can also save the content of the page with
    "$mech->save_content( 'filename.html' );"

  I submitted a form, but the server ignored everything! I got an empty form
  back!
    The POST is handled by application software. It is common for PHP
    programmers to use the same file both to display a form and to process
    the arguments returned. So the first task of the application programmer
    is to decide whether there are arguments to process. The program can
    check whether a particular parameter has been set, whether a hidden
    parameter has been set, or whether the submit button has been clicked.
    (There are probably other ways that I haven't thought of.)

    In any case, if your form is not setting the parameter (e.g. the submit
    button) which the web application is keying on (and as an outsider
    there is no way to know what it is keying on), it will not notice that
    the form has been submitted. Try using "$mech->click()" instead of
    "$mech->submit()", or vice versa.
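
    A sketch of the two variants; the form, field, and button names are
    placeholders:

        $mech->form_name( 'search' );
        $mech->field( query => 'perl' );

        # Click the named submit button so that its name=value pair is
        # included in the POST data ...
        $mech->click( 'go' );

        # ... or do a plain submit, which may omit the button's pair.
        # $mech->submit();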

  I've logged in to the server, but I get 500 errors when I try to get to
  protected content.
    Some web sites use distributed databases for their processing. It can
    take a few seconds for the login/session information to percolate
    through to all the servers. For human users with their slow reaction
    times, this is not a problem, but a Perl script can outrun the server.
    So try adding a sleep(5) between logging in and actually doing anything
    (the optimal delay must be determined experimentally).
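
    A minimal sketch of that pause; the URLs and field names are
    placeholders:

        $mech->get( 'http://www.example.com/login' );
        $mech->submit_form(
            fields => { username => 'me', password => 'secret' },
        );

        sleep 5;    # give the session time to reach all the backend servers

        $mech->get( 'http://www.example.com/protected/report.html' );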

  Mech is a big memory pig! I'm running out of RAM!
    Mech keeps a history of every page, and the state it was in. It
    actually keeps a clone of the full Mech object at every step along the
    way.

    You can limit this stack size with the "stack_depth" parameter to the
    "new()" constructor. If you set stack_depth to 0, Mech will not keep
    any history.
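
    For example:

        my $mech = WWW::Mechanize->new(
            stack_depth => 0,    # keep no page history at all
        );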

COPYRIGHT
    Copyright 2005-2009 Andy Lester "<andy at petdance.com>"



perl v5.12.0                       2010-04-11             WWW::Mechanize::FAQ(3)