globus_gram_job_manager_interface_tutorial(3)                    globus_gram_job_manager_interface_tutorial(3)


globus_gram_job_manager_interface_tutorial - GRAM Job Manager Scheduler
Tutorial

This tutorial describes the steps needed to build a GRAM Job Manager
Scheduler interface package.

The audience for this tutorial is a person interested in adding support
for a new scheduler interface to GRAM. This tutorial assumes some
familiarity with GPT, autoconf, automake, and Perl. As a reference
point, this tutorial will refer to the code in the LSF Job Manager
package.

This section deals with writing the Perl module which implements the
interface between the GRAM job manager and the local scheduler. Consult
the Job Manager Scheduler Interface section of this manual for a more
detailed reference on the Perl modules which are used here.

The scheduler interface is implemented as a Perl module which is a
subclass of the Globus::GRAM::JobManager module. Its name must match
the scheduler type string used when the service is installed. For the
LSF scheduler, the name is lsf, so the module name is
Globus::GRAM::JobManager::lsf and it is stored in the file lsf.pm.
Though there are several methods in the JobManager interface, the only
ones which absolutely need to be implemented in a scheduler module are
submit, poll, and cancel.

We'll begin by looking at the start of the LSF source module, lsf.in
(the transformation to lsf.pm happens when the setup script is run). To
begin the script, we import the GRAM support modules into the scheduler
module's namespace, declare the module's namespace, and declare this
module as a subclass of the Globus::GRAM::JobManager module. All
scheduler packages will need to do this, substituting the name of the
scheduler type being implemented where we see lsf below.

use Globus::GRAM::Error;
use Globus::GRAM::JobState;
use Globus::GRAM::JobManager;
use Globus::Core::Paths;


package Globus::GRAM::JobManager::lsf;

@ISA = qw(Globus::GRAM::JobManager);

Next, we declare any system-specific values which will be substituted
when the setup package scripts are run. In the LSF case, we need to
know the paths to a few programs which interact with the scheduler:

my ($mpirun, $bsub, $bjobs, $bkill);

BEGIN
{
    $mpirun = '@MPIRUN@';
    $bsub   = '@BSUB@';
    $bjobs  = '@BJOBS@';
    $bkill  = '@BKILL@';
}

The values surrounded by at-signs (such as @MPIRUN@) will be replaced
with the paths to the named programs by the find-lsf-tools script
described below.

Writing a constructor
Scheduler interfaces which need to set up some data before their other
methods are called can overload the new method, which acts as a
constructor. Scheduler scripts which don't need any per-instance
initialization will not need to provide a constructor; the
Globus::GRAM::JobManager constructor will do the job.

If you do need to overload this method, be sure to call the JobManager
module's constructor to allow it to do its initialization, as in this
example:

sub new
{
    my $proto = shift;
    my $class = ref($proto) || $proto;
    my $self = $class->SUPER::new(@_);

    ## Insert scheduler-specific startup code here

    return $self;
}

The job interface methods are called with only one argument, the
scheduler object itself. That object contains a
Globus::GRAM::JobDescription object ($self->{JobDescription}) which
includes the values from the RSL string associated with the request, as
well as a few extra values:

job_id
    The string returned as the value of JOB_ID in the return hash from
    submit. This won't be present for methods called before the job is
    submitted.

uniq_id
    A string associated with this job request by the job manager
    program. It will be unique for all jobs on a host for all time.

cache_tag
    The GASS cache tag related to this job submission. Files in the
    cache with this tag will be cleaned by the cleanup_cache() method.

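As a sketch of how a method body might read these values, consider the
following runnable fragment. The plain hash below stands in for a real
Globus::GRAM::JobDescription object (which exposes these fields through
accessor methods), and every value in it is made up for illustration:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Stand-in for the manager object that a scheduler method receives. In a
# real module, $self->{JobDescription} is a Globus::GRAM::JobDescription
# object whose fields are reached through accessors such as
# $description->uniq_id(); a plain hash and hypothetical values stand in
# here so the sketch runs on its own.
my $self = {
    JobDescription => {
        job_id    => '<1234>',
        uniq_id   => '12345.67890',
        cache_tag => 'x-gass-cache://example/12345',
    },
};

my $description = $self->{JobDescription};

# A method like poll() or cancel() would consult these fields.
print "uniq_id=$description->{uniq_id}\n";
print "cache_tag=$description->{cache_tag}\n";
```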
Now, let's look at the methods which will interface to the scheduler.

Submitting Jobs
All scheduler modules must implement the submit method. This method is
called when the job manager wishes to submit the job to the scheduler.
The information in the original job request RSL string is available to
the scheduler interface through the JobDescription data member of its
hash.

For most schedulers, this is the longest method to implement, as it
must decide what to do with the job description and convert it into
something which the scheduler can understand.

We'll look at some of the steps in the LSF manager code to see how the
scheduler interface is implemented.

In the beginning of the submit method, we'll get our parameters and
look up the job description in the manager-specific object:

sub submit
{
    my $self = shift;
    my $description = $self->{JobDescription};

Then we will check for values of the job parameters that we will be
handling. For example, this is how we check for a valid job type in the
LSF scheduler interface:

if(defined($description->jobtype()))
{
    if($description->jobtype !~ /^(mpi|single|multiple)$/)
    {
        return Globus::GRAM::Error::JOBTYPE_NOT_SUPPORTED;
    }
    elsif($description->jobtype() eq 'mpi' && $mpirun eq 'no')
    {
        return Globus::GRAM::Error::JOBTYPE_NOT_SUPPORTED;
    }
}

The lsf module supports most of the core RSL attributes, so it does
more processing to determine what to do with the values in the job
description.

Once we've inspected the JobDescription, we'll know what we need to
tell the scheduler so that it'll start the job properly. For LSF, we
will construct a job description script and pass it to the bsub
command. This script is a Bourne shell script with some special
comments which LSF uses to decide what constraints to apply when
scheduling the job.

First, we'll open the new file and write the file header:

$lsf_job_script = new IO::File($lsf_job_script_name, '>');

$lsf_job_script->print(<<EOF);
#! /bin/sh
#
# LSF batch job script built by Globus Job Manager
#
EOF

Then, we'll add some special comments to pass job constraints to LSF:

if(defined($queue))
{
    $lsf_job_script->print("#BSUB -q $queue\n");
}
if(defined($description->project()))
{
    $lsf_job_script->print("#BSUB -P " . $description->project() . "\n");
}

Before we start the executable in the LSF job description script, we
will quote and escape the job's arguments so that they will be passed
to the application as they were in the job submission RSL string.

At the end of the job description script, we actually run the
executable named in the JobDescription. For LSF, we support a few
different job types which require different startup commands. Here, we
will quote and escape the strings in the argument list so that the
values of the arguments will be identical to those in the initial job
request string. For this Bourne shell script, we will double-quote
each argument, escaping the backslash (\), dollar-sign ($),
double-quote ("), and backquote (`) characters. We will use this new
string later in the script.

@arguments = $description->arguments();

foreach(@arguments)
{
    if(ref($_))
    {
        return Globus::GRAM::Error::RSL_ARGUMENTS;
    }
}
if($arguments[0])
{
    foreach(@arguments)
    {
        $_ =~ s/\\/\\\\/g;
        $_ =~ s/\$/\\\$/g;
        $_ =~ s/"/\\"/g;
        $_ =~ s/`/\\`/g;

        $args .= '"' . $_ . '" ';
    }
}
else
{
    $args = '';
}

To end the LSF job description script, we will write the command line
of the executable to the script. Depending on the job type of this
submission, we will need to start either one or more instances of the
executable, or the mpirun program, which will start the job with the
process count given in the JobDescription:

if($description->jobtype() eq 'mpi')
{
    $lsf_job_script->print("$mpirun -np " . $description->count() . ' ');

    $lsf_job_script->print($description->executable() . " $args\n");
}
elsif($description->jobtype() eq 'multiple')
{
    for(my $i = 0; $i < $description->count(); $i++)
    {
        $lsf_job_script->print($description->executable() . " $args &\n");
    }
    $lsf_job_script->print("wait\n");
}
else
{
    $lsf_job_script->print($description->executable() . " $args\n");
}

Next, we submit the job to the scheduler. Be sure to close the script
file before trying to redirect it into the submit command, or some of
the script file may be buffered and things will fail in strange ways!

When the submission command returns, we check its output for the
scheduler-specific job identifier. We will use this value to be able to
poll or cancel the job.

The return value of the script should be either a GRAM error object or
a reference to a hash of values. The Globus::GRAM::JobManager
documentation lists the valid keys for that hash. For the submit
method, we'll return the job identifier as the value of JOB_ID in the
hash. If the scheduler returned a job status result, we could return
that as well. LSF does not, so we'll just check for the job ID and
return it, or, if the submission fails, we'll return an error object:

$lsf_job_script->close();

$job_id = (grep(/is submitted/,
                split(/\n/, `$bsub < $lsf_job_script_name`)))[0];
if($? == 0)
{
    $job_id =~ m/<([^>]*)>/;
    $job_id = $1;

    return { JOB_ID => $job_id };
}

return Globus::GRAM::Error::INVALID_SCRIPT_REPLY;
}

That finishes the submit method. Most of the functionality for the
scheduler interface is now written. We just have a few more (much
shorter) methods to implement.

Polling Jobs
All scheduler modules must also implement the poll method. The purpose
of this method is to check for updates to a job's status, for example,
to see if a job has finished.

When this method is called, we'll get the job ID (which we returned
from the submit method above) as well as the original job request
information in the object's JobDescription. In the LSF script, we'll
pass the job ID to the bjobs program, and that will return the job's
status information. We'll examine the status field from the bjobs
output to decide what job state to return.

If the job fails, and there is a way to determine that from the
scheduler, then the script should return both of the following in its
hash:

JOB_STATE => Globus::GRAM::JobState::FAILED

and

ERROR => Globus::GRAM::Error::<ERROR_TYPE>->value

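Put together, the failure branch of a poll implementation might bundle
both keys into one returned hash reference as in this sketch. The
numeric constants below are hypothetical stand-ins so the fragment runs
on its own; a real module would use the Globus::GRAM::JobState and
Globus::GRAM::Error constants:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical stand-ins for Globus::GRAM::JobState::FAILED and a
# Globus::GRAM::Error value; the real constants come from the modules
# imported at the top of the scheduler script.
use constant FAILED      => 4;
use constant ERROR_VALUE => 17;

# Sketch of a poll() failure branch: report the state change and the
# error cause together in a single hash reference.
sub failed_result
{
    return { JOB_STATE => FAILED, ERROR => ERROR_VALUE };
}

my $result = failed_result();
print "JOB_STATE=$result->{JOB_STATE} ERROR=$result->{ERROR}\n";
```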
Here's an excerpt from the LSF scheduler module implementation:

sub poll
{
    my $self = shift;
    my $description = $self->{JobDescription};
    my $job_id = $description->jobid();
    my $state;
    my $status_line;

    $self->log("polling job $job_id");

    # Get first line matching job id
    $_ = (grep(/$job_id/, `$bjobs $job_id 2>/dev/null`))[0];

    # Get 3rd field (status)
    $_ = (split(/\s+/))[2];

    if(/PEND/)
    {
        $state = Globus::GRAM::JobState::PENDING;
    }
    elsif(/USUSP|SSUSP|PSUSP/)
    {
        $state = Globus::GRAM::JobState::SUSPENDED;
    }
    ...
    return {JOB_STATE => $state};
}

Cancelling Jobs
All scheduler modules must also implement the cancel method. The
purpose of this method is to cancel a running job.

As with the poll method described above, this method will be given the
job ID as part of the JobDescription object held by the manager object.
If the scheduler interface provides feedback that the job was cancelled
successfully, then we can return a JOB_STATE change to the FAILED
state. Otherwise, we can return an empty hash reference and let the
poll method return the state change the next time it is called.

To process a cancel in the LSF case, we will run the bkill command with
the job ID.

sub cancel
{
    my $self = shift;
    my $description = $self->{JobDescription};
    my $job_id = $description->jobid();

    $self->log("cancel job $job_id");

    system("$bkill $job_id >/dev/null 2>/dev/null");

    if($? == 0)
    {
        return { JOB_STATE => Globus::GRAM::JobState::FAILED };
    }
    return Globus::GRAM::Error::JOB_CANCEL_FAILED;
}

End of the script
It is required that all Perl modules return a true value when they are
parsed. To do this, make sure the last line of your module consists of:

1;

Once we've written the job manager script, we need to get it installed
so that the gatekeeper will be able to run our new service. We do this
by writing a setup script. For LSF, we will write the script
setup-globus-job-manager-lsf.pl, which we will list in the LSF package
as the Post_Install_Program.

To set up the Gatekeeper service, our LSF setup script does the
following:

1. Perform system-specific configuration.

2. Install the GRAM scheduler Perl module and register it as a
   gatekeeper service.

3. (Optional) Install an RSL validation file defining extra scheduler-
   specific RSL attributes which the scheduler interface will support.

4. Update the GPT metadata to indicate that the job manager service
   has been set up.

System-Specific Configuration
First, our scheduler setup script probes for any system-specific
information needed to interface with the local scheduler. For example,
the LSF scheduler uses the mpirun, bsub, bqueues, bjobs, and bkill
commands to submit, poll, and cancel jobs. We'll assume that the
administrator who is installing the package has these commands in their
path. We'll use an autoconf script to locate the executable paths for
these commands and substitute them into our scheduler Perl module. In
the LSF package, we have the find-lsf-tools script, which is generated
during bootstrap by autoconf from the find-lsf-tools.in file:

## Required Prolog

AC_REVISION($Revision: 1.5 $)
AC_INIT(lsf.in)

# checking for GLOBUS_LOCATION

if test "x$GLOBUS_LOCATION" = "x"; then
    echo "ERROR: Please specify GLOBUS_LOCATION" >&2
    exit 1
fi


## Check for optional tools, warn if not found

AC_PATH_PROG(MPIRUN, mpirun, no)
if test "$MPIRUN" = "no" ; then
    AC_MSG_WARN([Cannot locate mpirun])
fi


## Check for required tools, error if not found

AC_PATH_PROG(BSUB, bsub, no)
if test "$BSUB" = "no" ; then
    AC_MSG_ERROR([Cannot locate bsub])
fi


## Required epilog - update scheduler specific module

prefix='$(GLOBUS_LOCATION)'
exec_prefix='$(GLOBUS_LOCATION)'
libexecdir=${prefix}/libexec

AC_OUTPUT(
    lsf.pm:lsf.in
)

If this script exits with a non-zero error code, then the setup script
propagates the error to the caller and exits without installing the
service.

Registering as a Gatekeeper Service
Next, the setup script installs its Perl module into the Perl library
directory and registers an entry in the Globus Gatekeeper's service
directory. The program globus-job-manager-service (distributed in the
job manager program setup package) performs both of these tasks. When
run, it expects the scheduler Perl module to be located in the
$GLOBUS_LOCATION/setup/globus directory.

$libexecdir/globus-job-manager-service -add -m lsf -s jobmanager-lsf

Installing an RSL Validation File
If the scheduler script implements RSL attributes which are not part of
the core set supported by the job manager, it must publish them in the
job manager's data directory. If the scheduler script wants to set
default values for some RSL attributes, it may also set those as the
default values in the validation file.

The format of the validation file is described in the RSL Validation
File Format section of the documentation. The validation file must be
named scheduler-type.rvf and installed in the
$GLOBUS_LOCATION/share/globus_gram_job_manager directory.

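For illustration, a minimal lsf.rvf entry for the queue attribute might
look like the fragment below. This is a sketch: the Attribute and
Values lines mirror what the LSF setup script prints when generating
the file, the Description text and the queue names "short" and "long"
are hypothetical, and the RSL Validation File Format section remains
the authoritative reference for the syntax.

```
Attribute: queue
Description: "Target LSF queue for the job"
Values: short long
```
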
In the LSF setup script, we check the list of queues supported by the
local LSF installation, and add a section of acceptable values for the
queue RSL attribute:

open(VALIDATION_FILE,
     ">$ENV{GLOBUS_LOCATION}/share/globus_gram_job_manager/lsf.rvf");

# Customize validation file with queue info
open(BQUEUES, "bqueues -w |");

# discard header
$_ = <BQUEUES>;
my @queues = ();

while(<BQUEUES>)
{
    chomp;

    $_ =~ m/^(\S+)/;

    push(@queues, $1);
}
close(BQUEUES);

if(@queues)
{
    print VALIDATION_FILE "Attribute: queue\n";
    print VALIDATION_FILE join(' ', 'Values:', @queues);
}
close VALIDATION_FILE;

Updating GPT Metadata
Finally, the setup package should create and finalize a
Grid::GPT::Setup object. The value of $package must be the same value
as the gpt_package_metadata Name attribute in the package's metadata
file. If either the new() or finish() methods fail, then it is
considered good practice to clean up any files created by the setup
script. From setup-globus-job-manager-lsf.pl:

my $metadata =
    new Grid::GPT::Setup(
        package_name => 'globus_gram_job_manager_setup_lsf');


$metadata->finish();

Now that we've written a job manager scheduler interface, we'll package
it using GPT to make it easy for our users to build and install. We'll
start by gathering the different files we've written above into a
single directory lsf.

· lsf.in

· find-lsf-tools.in

· setup-globus-job-manager-lsf.pl

Package Documentation
If there are any scheduler-specific options defined for this scheduler
module, or if there are any optional setup items, then it is good to
provide a documentation page which describes these. For LSF, we
describe the changes since the last version of this package in the file
globus_gram_job_manager_lsf.dox. This file consists of a doxygen
mainpage. See www.doxygen.org for information on how to write
documentation with that tool.

configure.in
Now, we'll write our configure.in script. This file is converted to the
configure shell script by the bootstrap script below. Since we don't do
any probes for compile-time tools or system characteristics, we just
call the various initialization macros used by GPT, declare that we may
provide doxygen documentation, and then output the files we need
substitutions done on.

AC_REVISION($Revision: 1.5 $)
AC_INIT(Makefile.am)

GLOBUS_INIT
AM_PROG_LIBTOOL

dnl Initialize the automake rules with the package name and version
AM_INIT_AUTOMAKE($GPT_NAME, $GPT_VERSION)

LAC_DOXYGEN("../", "*.dox")

GLOBUS_FINALIZE

AC_OUTPUT(
    Makefile
    pkgdata/Makefile
    pkgdata/pkg_data_src.gpt
    doxygen/Doxyfile
    doxygen/Doxyfile-internal
    doxygen/Makefile
)

Package Metadata
Now we'll write our metadata file and put it in the pkgdata
subdirectory of our package. The important things to note in this file
are the package name and version, the post_install_program, and the
setup sections. These define how the package distribution will be
named, what command will be run by gpt-postinstall when this package is
installed, and what setup dependencies will be recorded when the
Grid::GPT::Setup object is finalized.

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE gpt_package_metadata SYSTEM "package.dtd">

<gpt_package_metadata Format_Version="0.02" Name="globus_gram_job_manager_setup_lsf" >

  <Aging_Version Age="0" Major="1" Minor="0" />
  <Description >LSF Job Manager Setup</Description>
  <Functional_Group >ResourceManagement</Functional_Group>
  <Version_Stability Release="Beta" />
  <src_pkg >

    <With_Flavors build="no" />
    <Source_Setup_Dependency PkgType="pgm" >
      <Setup_Dependency Name="globus_gram_job_manager_setup" >
        <Version >
          <Simple_Version Major="3" />
        </Version>
      </Setup_Dependency>
      <Setup_Dependency Name="globus_common_setup" >
        <Version >
          <Simple_Version Major="2" />
        </Version>
      </Setup_Dependency>
    </Source_Setup_Dependency>

    <Build_Environment >
      <cflags >@GPT_CFLAGS@</cflags>
      <external_includes >@GPT_EXTERNAL_INCLUDES@</external_includes>
      <pkg_libs > </pkg_libs>
      <external_libs >@GPT_EXTERNAL_LIBS@</external_libs>
    </Build_Environment>

    <Post_Install_Message >
      Run the setup-globus-job-manager-lsf setup script to configure an
      lsf job manager.
    </Post_Install_Message>

    <Post_Install_Program >
      setup-globus-job-manager-lsf
    </Post_Install_Program>

    <Setup Name="globus_gram_job_manager_service_setup" >
      <Aging_Version Age="0" Major="1" Minor="0" />
    </Setup>

  </src_pkg>

</gpt_package_metadata>

Automake Makefile.am
The automake Makefile.am for this package is short because there isn't
any compilation needed for this package. We just need to define what
needs to be installed into which directory, and what source files need
to be put into our source distribution. For the LSF package, we need to
list the lsf.in, find-lsf-tools, and setup-globus-job-manager-lsf.pl
scripts as files to be installed into the setup directory. We need to
add those files plus our documentation source file to the EXTRA_DIST
variable so that they will be included in source distributions. The
rest of the lines in the file are needed for proper interaction with
GPT.

include $(top_srcdir)/globus_automake_pre
include $(top_srcdir)/globus_automake_pre_top

SUBDIRS = pkgdata doxygen

setup_SCRIPTS = lsf.in find-lsf-tools setup-globus-job-manager-lsf.pl

EXTRA_DIST = $(setup_SCRIPTS) globus_gram_job_manager_lsf.dox

include $(top_srcdir)/globus_automake_post
include $(top_srcdir)/globus_automake_post_top

Bootstrap
The final piece we need to write for our package is the bootstrap
script. This script is the standard bootstrap script for a Globus
package, with an extra line to generate the find-lsf-tools script using
autoconf.

#!/bin/sh

# checking for GLOBUS_LOCATION

if test "x$GLOBUS_LOCATION" = "x"; then
    echo "ERROR: Please specify GLOBUS_LOCATION" >&2
    exit 1
fi

if [ ! -f ${GLOBUS_LOCATION}/libexec/globus-bootstrap.sh ]; then
    echo "ERROR: Unable to locate \${GLOBUS_LOCATION}/libexec/globus-bootstrap.sh"
    echo "       Please ensure that you have installed the globus-core package and"
    echo "       that GLOBUS_LOCATION is set to the proper directory"
    exit 1
fi


autoconf find-lsf-tools.in > find-lsf-tools
chmod 755 find-lsf-tools

exit 0

With this all done, we can now try to build our new package. To do so,
we'll need to run

% ./bootstrap
% ./globus-build

If all of the files are written correctly, this should result in our
package being installed into $GLOBUS_LOCATION. Once that is done, we
should be able to run gpt-postinstall to configure our new job manager.

Now, we should be able to run the command

% globus-personal-gatekeeper -start -jmtype lsf

to start a gatekeeper configured to run a job manager using our new
scripts. Running this will output a contact string (referred to as
<contact-string> below), which we can use to connect to this new
service. To do so, we'll run globus-job-run to submit a test job:

% globus-job-run <contact-string> /bin/echo Hello, LSF
Hello, LSF

When Things Go Wrong
If the test above fails, or more complicated job failures are
occurring, then you'll have to debug your scheduler interface. Here
are a few tips to help you out.

Make sure that your script is valid Perl. If you run

perl -I$GLOBUS_LOCATION/lib/perl $GLOBUS_LOCATION/lib/perl/Globus/GRAM/JobManager/lsf.pm

you should get no output. If there are any diagnostics, correct them
(in the lsf.in file), reinstall your package, and rerun the setup
script.

Look at the Globus Toolkit Error FAQ to see if the failure is perhaps
not related to your scheduler script at all.

Enable logging for the job manager. By default, the job manager is
configured to log only when it notices a job failure. However, if your
problem is that your script is not returning a failure code when you
expect, you might want to enable logging always. To do this, modify the
job manager configuration file to contain "-save-logfile always" in
place of "-save-logfile on_error".

Add logging messages to your script: the JobManager object implements
a log method, which allows you to write messages to the job manager
log file. Do this as your methods are called to pinpoint where the
error occurs.

Save the job description file when your script is run. This will allow
you to run the globus-job-manager-script.pl interactively (or in the
Perl debugger). To save the job description file, you can call

$self->{JobDescription}->save("/tmp/job_description.$$");

in any of the methods you've implemented.


Version 10.70                          Tue Jun 7 2011                          globus_gram_job_manager_interface_tutorial(3)