WWW::Mechanize::FAQ(3) User Contributed Perl Documentation WWW::Mechanize::FAQ(3)


NAME

WWW::Mechanize::FAQ - Frequently Asked Questions about WWW::Mechanize

If your question isn't answered here in the FAQ, please turn to the
communities at:

* <http://perlmonks.org>
* The libwww-perl mailing list at <http://lists.perl.org>

JAVASCRIPT

I have this web page that has JavaScript on it, and my Mech program
doesn't work.

That's because WWW::Mechanize doesn't operate on the JavaScript. It
only understands the HTML parts of the page.

I thought Mech was supposed to work like a web browser.

It does, pretty much, but it doesn't support JavaScript.

I added some basic attempts at picking up URLs in "window.open()"
calls and returning them in "$mech->links". They work sometimes.
Beyond that, there's no support for JavaScript.

Are you going to add JavaScript support?

I will if anyone sends me the code to do it. I'm not going to write a
JavaScript processor myself.

Wouldn't that be a great thing to have in WWW::Mechanize?

Yes.

Would it be hard to do?

Hard enough that I don't want to deal with it myself. Plus, I don't
use JavaScript myself, so I don't have an itch to scratch.

Is anyone working on it?

I've heard noises from people every so often over the past couple of
years, but nothing you'd pin your hopes on.

It would really help me with a project I'm working on.

I'm sure it would.

Do you know when it might get added?

I have no idea if or when such a thing will ever get done. I can
guarantee that as soon as there's anything close to JavaScript
support I will let everyone know.

Maybe I'll ask around and see if anyone else knows of a solution.

If you must, but I doubt that anyone's written JavaScript support for
Mechanize and neglected to tell me about it.

So what can I do?

Since JavaScript is completely visible to the client, it cannot be
used to prevent a scraper from following links. But it can make life
difficult, and until someone writes a JavaScript interpreter for Perl
or a Mechanize clone to control Firefox, there will be no general
solution. But if you want to scrape specific pages, then a solution
is always possible.

One typical use of JavaScript is to perform argument checking before
posting to the server. The URL you want is probably just buried in
the JavaScript function. Do a regular expression match on
"$mech->content()" to find the link that you want and "$mech->get" it
directly (this assumes that you know what you are looking for in
advance).

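As a hedged sketch of that regex approach (the page URL and the
"window.open()" markup shown are invented for illustration; your
pattern will be specific to your page):

    use strict;
    use warnings;
    use WWW::Mechanize;

    my $mech = WWW::Mechanize->new( autocheck => 1 );
    $mech->get( 'http://www.example.com/reports' );   # hypothetical page

    # Suppose the page contains something like:
    #   <a href="#" onclick="window.open('/report.cgi?id=42')">Report</a>
    # Pull the URL out of the script text with a regex and fetch it
    # directly, instead of trying to execute the JavaScript.
    if ( $mech->content() =~ /window\.open\(\s*'([^']+)'/ ) {
        $mech->get( $1 );
    }
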
In more difficult cases, the JavaScript is used for URL mangling to
satisfy the needs of some middleware. In this case you need to figure
out what the JavaScript is doing (why are these URLs always really
long?). There is probably some function with one or more arguments
which calculates the new URL. Step one: using your favorite browser,
get the before and after URLs and save them to files. Edit each file,
converting the argument separators ('?', '&' or ';') into newlines.
Now it is easy to use diff or comm to find out what the JavaScript
did to the URL. Step two: find the function call which created the
URL; you will need to parse and interpret its argument list. Using
the JavaScript Debugger Extension for Firefox may help with the
analysis. At this point, it is fairly trivial to write your own
function which emulates the JavaScript for the pages you want to
process.

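The splitting-and-diffing in step one takes only a couple of shell
commands; the two URLs below are invented stand-ins for ones captured
in the browser:

    # Two URLs captured in the browser: before and after the JavaScript ran.
    printf 'http://host/page?a=1&b=2\n' > before.url
    printf 'http://host/page?a=1&b=2&sig=xyz\n' > after.url

    # Convert each argument separator ('?', '&' or ';') into a newline.
    tr '?&;' '\n\n\n' < before.url > before.args
    tr '?&;' '\n\n\n' < after.url  > after.args

    # diff now shows exactly what the JavaScript added (it exits 1 on
    # any difference, which is the interesting case here).
    diff before.args after.args || true
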
Here's another approach that answers the question, "It works in
Firefox, but why not Mech?" Everything the web server knows about the
client is present in the HTTP request. If two requests are identical,
the results should be identical. So the real question is "What is
different between the Mech request and the Firefox request?"

The Firefox extension "Tamper Data" is an effective tool for
examining the headers of the requests to the server. Compare that
with what LWP is sending. Once the two are identical, the action of
the server should be the same as well.

I say "should", because this is an oversimplification - some values
are naturally unique, e.g. a SessionID, but if a SessionID is
present, that is probably sufficient, even though the value will be
different between the LWP request and the Firefox request. The server
could use the session to store information which is troublesome, but
that's not the first place to look (and highly unlikely to be
relevant when you are requesting the login page of your site).

Generally the problem is to be found in missing or incorrect POSTDATA
arguments, Cookies, User-Agents, Accepts, etc. If you are using Mech,
then redirects and cookies should not be a problem, but are listed
here for completeness. If you are missing headers, "$mech->add_header"
can be used to add the headers that you need.

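For instance, a sketch of lining Mech's headers up with what Tamper
Data shows the browser sending (the header values and URLs below are
examples, not anything Mech sets by default):

    use strict;
    use warnings;
    use WWW::Mechanize;

    my $mech = WWW::Mechanize->new( autocheck => 1 );

    # Mimic the browser's request headers as seen in Tamper Data.
    $mech->agent( 'Mozilla/5.0 (X11; U; Linux i686; rv:1.8.1) Firefox/2.0' );
    $mech->add_header( 'Accept-Language' => 'en-us,en;q=0.5' );
    $mech->add_header( Referer          => 'http://www.example.com/login' );

    $mech->get( 'http://www.example.com/members' );   # hypothetical URL
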
HOW DO I DO X?

Can I do [such-and-such] with WWW::Mechanize?

If it's possible with LWP::UserAgent, then yes. WWW::Mechanize is a
subclass of LWP::UserAgent, so all the wondrous magic of that class
is inherited.

How do I use WWW::Mechanize through a proxy server?

See the docs in LWP::UserAgent on how to use the proxy. Short
version:

    $mech->proxy(['http', 'ftp'], 'http://proxy.example.com:8000/');

or get the specs from the environment:

    $mech->env_proxy();

    # Environment set like so:
    gopher_proxy=http://proxy.my.place/
    wais_proxy=http://proxy.my.place/
    no_proxy="localhost,my.domain"
    export gopher_proxy wais_proxy no_proxy

How can I see what fields are on the forms?

Use the mech-dump utility, optionally installed with Mechanize.

    $ mech-dump --forms http://search.cpan.org
    Dumping forms
    GET http://search.cpan.org/search
      query=
      mode=all (option) [*all|module|dist|author]
      <NONAME>=CPAN Search (submit)

How do I get Mech to handle authentication?

    use MIME::Base64;

    my $agent = WWW::Mechanize->new();
    my @args = (
        Authorization => "Basic " .
            MIME::Base64::encode( USER . ':' . PASS )
    );

    $agent->credentials( ADDRESS, REALM, USER, PASS );
    $agent->get( URL, @args );

How can I get WWW::Mechanize to execute this JavaScript?

You can't. JavaScript is entirely client-based, and WWW::Mechanize is
a client that doesn't understand JavaScript. See the top part of this
FAQ.

How do I check a checkbox that doesn't have a value defined?

Set it to the value of "on".

    $mech->field( my_checkbox => 'on' );

How do I handle frames?

You don't deal with them as frames, per se, but as links. Extract
them with

    my @frame_links = $mech->find_all_links( tag => "frame" );

How do I get a list of HTTP headers and their values?

All HTTP::Headers methods work on the HTTP::Response object which is
returned by the get(), reload(), response()/res(), click(),
submit_form(), and request() methods.

    my $mech = WWW::Mechanize->new( autocheck => 1 );
    $mech->get( 'http://my.site.com' );
    my $res = $mech->response();
    for my $key ( $res->header_field_names() ) {
        print $key, " : ", $res->header( $key ), "\n";
    }

WHY DOESN'T THIS WORK?

My Mech program doesn't work, but it works in the browser.

Mechanize acts like a browser, but apparently something you're doing
is not matching the browser's behavior. Maybe it's expecting a
certain web client, or maybe you're not handling a field properly.
For some reason, your Mech program isn't doing exactly what the
browser is doing, and when you find that, you'll have the answer.

My Mech program gets these 500 errors.

A 500 error from the web server says that the program on the server
side died. Probably the web server program was expecting certain
inputs that you didn't supply, and instead of handling it nicely, the
program died.

Whatever the cause of the 500 error, if it works in the browser, but
not in your Mech program, you're not acting like the browser. See the
previous question.

Why doesn't my program handle this form correctly?

Run mech-dump on your page and see what it says.

mech-dump is a marvelous diagnostic tool for figuring out what forms
and fields are on the page. Say you're scraping CNN.com; you'd get
this:

    $ mech-dump http://www.cnn.com/
    GET http://search.cnn.com/cnn/search
      source=cnn (hidden readonly)
      invocationType=search/top (hidden readonly)
      sites=web (radio) [*web/The Web ??|cnn/CNN.com ??]
      query= (text)
      <NONAME>=Search (submit)

    POST http://cgi.money.cnn.com/servlets/quote_redirect
      query= (text)
      <NONAME>=GET (submit)

    POST http://polls.cnn.com/poll
      poll_id=2112 (hidden readonly)
      question_1=<UNDEF> (radio) [1/Simplistic option|2/VIEW RESULTS]
      <NONAME>=VOTE (submit)

    GET http://search.cnn.com/cnn/search
      source=cnn (hidden readonly)
      invocationType=search/bottom (hidden readonly)
      sites=web (radio) [*web/??CNN.com|cnn/??]
      query= (text)
      <NONAME>=Search (submit)

Four forms, including the first one duplicated at the end. All the
fields, all their defaults, lovingly generated by HTML::Form's "dump"
method.

If you want to run mech-dump on something that doesn't lend itself to
a quick URL fetch, then use the "save_content()" method to write the
HTML to a file, and run mech-dump on the file.

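For example, to dump the forms on a page that is only reachable after
a login (the URL, field names, and filename here are invented):

    use strict;
    use warnings;
    use WWW::Mechanize;

    my $mech = WWW::Mechanize->new( autocheck => 1 );

    $mech->get( 'http://www.example.com/login' );     # hypothetical URL
    $mech->submit_form(
        with_fields => { user => 'me', password => 'secret' },
    );

    # Save the post-login page, then inspect it offline with:
    #   $ mech-dump --forms after-login.html
    $mech->save_content( 'after-login.html' );
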
Why don't https:// URLs work?

You need either IO::Socket::SSL or Crypt::SSLeay installed.

Why do I get "Input 'fieldname' is readonly"?

You're trying to change the value of a hidden field and you have
warnings on.

First, make sure that you actually mean to change the field that
you're changing, and that you don't have a typo. Usually, hidden
variables are set by the site you're working on for a reason. If you
change the value, you might be breaking some functionality by faking
it out.

If you really do want to change a hidden value, make the changes in a
scope that has warnings turned off:

    {
        local $^W = 0;
        $agent->field( name => $value );
    }

I tried to [such-and-such] and I got this weird error.

Are you checking your errors?

Are you sure?

Are you checking that your action succeeded after every action?

Are you sure?

For example, if you try this:

    $mech->get( "http://my.site.com" );
    $mech->follow_link( "foo" );

and the "get" call fails for some reason, then the Mech internals
will be unusable for the "follow_link" and you'll get a weird error.
You must, after every action that GETs or POSTs a page, check that
Mech succeeded, or all bets are off.

    $mech->get( "http://my.site.com" );
    die "Can't even get the home page: ", $mech->response->status_line
        unless $mech->success;

    $mech->follow_link( "foo" );
    die "Foo link failed: ", $mech->response->status_line
        unless $mech->success;

How do I figure out why "$mech->get($url)" doesn't work?

There are many reasons why a "get()" can fail. The server can take
you to someplace you didn't expect. It can generate redirects which
are not properly handled. You can get time-outs. Servers are down
more often than you think! Etc., etc., etc. A couple of places to
start:

    1. Check "$mech->status()" after each call
    2. Check the URL with "$mech->uri()" to see where you ended up
    3. Try debugging with "LWP::Debug"

If things are really strange, turn on debugging with "use LWP::Debug
qw(+);". Just put this in the main program. This causes LWP to print
out a trace of the HTTP traffic between client and server and can be
used to figure out what is happening at the protocol level.

It is also useful to set many traps to verify that processing is
proceeding as expected. A Mech program should always have an "I
didn't expect to get here" or "I don't recognize the page that I am
processing" case and bail out.

Since errors can be transient, by the time you notice that the error
has occurred, it might not be possible to reproduce it manually. So
for automated processing it is useful to email yourself the following
information:

    * where processing is taking place
    * an error message
    * $mech->uri
    * $mech->content

You can also save the content of the page with
"$mech->save_content( 'filename.html' );"

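A sketch of such a trap (the URL, the page test, and the filename are
all invented for illustration):

    use strict;
    use warnings;
    use WWW::Mechanize;

    my $mech = WWW::Mechanize->new( autocheck => 1 );
    $mech->get( 'http://www.example.com/step2' );     # hypothetical URL

    # Bail out loudly, keeping the evidence, if this isn't the page
    # we expected to be on.
    unless ( $mech->content =~ /Order Confirmation/ ) {
        $mech->save_content( 'unexpected-page.html' );
        die "I don't recognize the page at ", $mech->uri, "\n";
    }
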
I submitted a form, but the server ignored everything! I got an empty
form back!

The POST is handled by application software. It is common for PHP
programmers to use the same file both to display a form and to
process the arguments returned. So the first task of the application
programmer is to decide whether there are arguments to process. The
program can check whether a particular parameter has been set,
whether a hidden parameter has been set, or whether the submit button
has been clicked. (There are probably other ways that I haven't
thought of.)

In any case, if your form is not setting the parameter (e.g. the
submit button) which the web application is keying on (and as an
outsider there is no way to know what it is keying on), it will not
notice that the form has been submitted. Try using "$mech->click()"
instead of "$mech->submit()" or vice-versa.

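As a sketch (the URL, form fields, and button name are hypothetical):

    use strict;
    use warnings;
    use WWW::Mechanize;

    my $mech = WWW::Mechanize->new( autocheck => 1 );
    $mech->get( 'http://www.example.com/search' );    # hypothetical URL

    $mech->form_number( 1 );
    $mech->field( query => 'perl' );

    # submit() sends the form without any submit-button parameter;
    # click() also sends the named button, which some applications
    # key on.
    $mech->click( 'go' );      # instead of $mech->submit()
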
I've logged in to the server, but I get 500 errors when I try to get
to protected content.

Some web sites use distributed databases for their processing. It can
take a few seconds for the login/session information to percolate
through to all the servers. For human users with their slow reaction
times, this is not a problem, but a Perl script can outrun the
server. So try adding a sleep(5) between logging in and actually
doing anything (the optimal delay must be determined experimentally).

Mech is a big memory pig! I'm running out of RAM!

Mech keeps a history of every page, and the state it was in. It
actually keeps a clone of the full Mech object at every step along
the way.

You can limit this stack size with the "stack_depth" param in the
"new()" constructor.

COPYRIGHT

Copyright 2005 Andy Lester "<andy at petdance.com>"


perl v5.8.8                       2007-10-30           WWW::Mechanize::FAQ(3)