just-man-pages/condor_ssh_to_job(1)   General Commands Manual   just-man-pages/condor_ssh_to_job(1)

Name

       condor_ssh_to_job - create an ssh session to a running job

Synopsis

       condor_ssh_to_job [ -help ]

       condor_ssh_to_job [ -debug ] [ -name schedd-name ] [ -pool pool-name ]
       [ -ssh ssh-command ] [ -keygen-options ssh-keygen-options ] [ -shells
       shell1,shell2,... ] [ -auto-retry ] [ -remove-on-interrupt ] cluster |
       cluster.process | cluster.process.node [ remote-command ]


Description

       condor_ssh_to_job creates an ssh session to a running job. The job is
       specified with the argument. If only the job cluster id is given, then
       the job process id defaults to the value 0.

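       The defaulting rule can be illustrated with a small shell sketch; this
       is not HTCondor code, and the job ids are made up for illustration:

```shell
# A job is named by cluster, cluster.process, or cluster.process.node.
# A bare cluster id is treated as process 0, so "32" means "32.0".
normalize_job_id() {
  case $1 in
    *.*) printf '%s\n' "$1" ;;    # already carries a process id
    *)   printf '%s.0\n' "$1" ;;  # bare cluster id: process defaults to 0
  esac
}

normalize_job_id 32     # prints 32.0
normalize_job_id 32.5   # prints 32.5
```
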
       condor_ssh_to_job is available in Unix HTCondor distributions, and
       works with two kinds of jobs: those in the vanilla, vm, java, local,
       or parallel universes, and those jobs in the grid universe which use
       EC2 resources. It will not work with other grid universe jobs.

       For jobs in the vanilla, vm, java, local, or parallel universes, the
       user must be the owner of the job or must be a queue super user, and
       both the condor_schedd and condor_starter daemons must allow
       condor_ssh_to_job access. If no remote-command is specified, an
       interactive shell is created. An alternate ssh program such as sftp
       may be specified, using the -ssh option, for uploading and
       downloading files.

       The remote command or shell runs with the same user id as the running
       job, and it is initialized with the same working directory. The
       environment is initialized to be the same as that of the job, plus
       any changes made by the shell setup scripts and any environment
       variables passed by the ssh client. In addition, the environment
       variable _CONDOR_JOB_PIDS is defined. It is a space-separated list of
       PIDs associated with the job. At a minimum, the list will contain the
       PID of the process started when the job was launched, and it will be
       the first item in the list. It may contain additional PIDs of other
       processes that the job has created.

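       Inside the session, the first entry of _CONDOR_JOB_PIDS can be
       extracted with ordinary parameter expansion; the PID values below are
       illustrative, not real:

```shell
# _CONDOR_JOB_PIDS is a space-separated list of the job's PIDs; the first
# entry is the process started when the job was launched.
_CONDOR_JOB_PIDS="65881 65902 65917"   # example value for illustration
main_pid=${_CONDOR_JOB_PIDS%% *}       # strip everything after the first space
echo "$main_pid"                       # prints 65881
```

       The `%% *` expansion also works when the list holds a single PID,
       since there is then no space to strip.
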
       The ssh session and all processes it creates are treated by HTCondor
       as though they are processes belonging to the job. If the slot is
       preempted or suspended, the ssh session is killed or suspended along
       with the job. If the job exits before the ssh session finishes, the
       slot remains in the Claimed Busy state and is treated as though not
       all job processes have exited until all ssh sessions are closed.
       Multiple ssh sessions may be created to the same job at the same
       time. Resource consumption of the sshd process and all processes
       spawned by it is monitored by the condor_starter as though these
       processes belong to the job, so any policies such as PREEMPT that
       enforce a limit on resource consumption also take into account
       resources consumed by the ssh session.

       condor_ssh_to_job stores ssh keys in temporary files within a newly
       created and uniquely named directory. The newly created directory
       will be within the directory defined by the environment variable
       TMPDIR. When the ssh session is finished, this directory and the ssh
       keys contained within it are removed.

       See the HTCondor administrator's manual section on configuration for
       details of the configuration variables related to condor_ssh_to_job.

       An ssh session works by first authenticating and authorizing a
       secure connection between condor_ssh_to_job and the condor_starter
       daemon, using HTCondor protocols. The condor_starter generates an
       ssh key pair and sends it securely to condor_ssh_to_job. Then the
       condor_starter spawns sshd in inetd mode with its stdin and stdout
       attached to the TCP connection from condor_ssh_to_job.
       condor_ssh_to_job acts as a proxy for the ssh client to communicate
       with sshd, using the existing connection authorized by HTCondor. At
       no point is sshd listening on the network for connections or running
       with any privileges other than those of the user identity running
       the job. If CCB is being used to enable connectivity to the execute
       node from outside of a firewall or private network,
       condor_ssh_to_job is able to make use of CCB in order to form the
       ssh connection.

       The login shell of the user id running the job is used to run the
       requested command, sshd subsystem, or interactive shell. This is
       hard-coded behavior in OpenSSH and cannot be overridden by
       configuration. This means that condor_ssh_to_job access is
       effectively disabled if the login shell disables access, as in the
       example programs /bin/true and /sbin/nologin.

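       Whether an account's login shell would block access can be read off
       its passwd entry; the account and entry below are hypothetical:

```shell
# Decide whether a login shell effectively disables condor_ssh_to_job.
# Sample /etc/passwd line (hypothetical account, for illustration only):
entry='batchuser:x:1001:1001::/home/batchuser:/sbin/nologin'
shell=${entry##*:}                 # the login shell is the last colon field
case $shell in
  */nologin|*/false|*/true) echo "ssh-to-job effectively disabled" ;;
  *)                        echo "login shell permits ssh-to-job" ;;
esac
```
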
       condor_ssh_to_job is intended to work with OpenSSH as installed in
       typical environments. It does not work on Windows platforms. If the
       ssh programs are installed in non-standard locations, then the paths
       to these programs will need to be customized within the HTCondor
       configuration. Versions of ssh other than OpenSSH may work, but they
       will likely require additional configuration of command-line
       arguments, changes to the sshd configuration template file, and
       possibly modification of the $(LIBEXEC)/condor_ssh_to_job_sshd_setup
       script used by the condor_starter to set up sshd.

       For jobs in the grid universe which use EC2 resources, a request
       that HTCondor have the EC2 service create a new key pair for the job
       by specifying ec2_keypair_file causes condor_ssh_to_job to attempt
       to connect to the corresponding instance via ssh. This attempt
       invokes ssh directly, bypassing the HTCondor networking layer. It
       supplies ssh with the public DNS name of the instance and the name
       of the file with the new key pair's private key. For the connection
       to succeed, the instance must have started an ssh server, and its
       security group(s) must allow connections on port 22. Conventionally,
       images will allow logins using the key pair on a single specific
       account. Because ssh defaults to logging in as the current user, the
       -l <username> option or its equivalent for other versions of ssh
       will be needed as part of the remote-command argument. Although the
       -X option does not apply to EC2 jobs, adding -X or -Y to the
       remote-command argument can duplicate the effect.

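       As a sketch, such an invocation might be built as below. The job id
       and the ec2-user account name are assumptions for illustration (the
       correct account depends on the image), and the command is only
       printed, not run:

```shell
# Hypothetical EC2 grid-universe job id:
jobid=108.0
# Pass -l (and optionally -Y) through the remote-command argument so ssh
# logs in as the image's conventional account rather than the local user.
printf 'condor_ssh_to_job %s "-l ec2-user -Y"\n' "$jobid"
```
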

Options

       -help

          Display brief usage information and exit.

       -debug

          Causes debugging information to be sent to stderr, based on the
          value of the configuration variable TOOL_DEBUG.

       -name schedd-name

          Specify an alternate condor_schedd, if the default (local) one is
          not desired.

       -pool pool-name

          Specify an alternate HTCondor pool, if the default one is not
          desired. Does not apply to EC2 jobs.

       -ssh ssh-command

          Specify an alternate ssh program to run in place of ssh, for
          example sftp or scp. Additional arguments are specified as part
          of ssh-command. Since the arguments are delimited by spaces,
          place double quote marks around the whole command, to prevent the
          shell from splitting it into multiple arguments to
          condor_ssh_to_job. If any arguments must contain spaces, enclose
          them within single quotes. Does not apply to EC2 jobs.

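          The quoting rule can be seen with a stand-in that just prints its
          arguments (show_args is a hypothetical helper, not part of
          HTCondor; -B is sftp's buffer-size flag):

```shell
# Print each argument on its own line, bracketed, to show word splitting.
show_args() { for a in "$@"; do printf '[%s]\n' "$a"; done; }

# Unquoted, the shell splits the ssh command into several arguments:
show_args -ssh sftp -B 65536      # four arguments reach the tool

# Double-quoted, the whole command reaches the tool as one argument:
show_args -ssh "sftp -B 65536"    # two arguments reach the tool
```
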
       -keygen-options ssh-keygen-options

          Specify additional arguments to the ssh-keygen program, for
          creating the ssh key that is used for the duration of the
          session. For example, a different number of bits could be used,
          or a different key type than the default. Does not apply to EC2
          jobs.

       -shells shell1,shell2,...

          Specify a comma-separated list of shells to attempt to launch. If
          the first shell does not exist on the remote machine, then the
          following ones in the list will be tried. If none of the
          specified shells can be found, /bin/sh is used by default. If
          this option is not specified, it defaults to the environment
          variable SHELL from within the condor_ssh_to_job environment.
          Does not apply to EC2 jobs.

       -auto-retry

          Specifies that if the job is not yet running, condor_ssh_to_job
          should keep trying periodically until it succeeds or encounters
          some other error.

       -remove-on-interrupt

          If specified, attempt to remove the job from the queue if
          condor_ssh_to_job is interrupted via a CTRL-c or otherwise
          terminated abnormally.

       -X

          Enable X11 forwarding. Does not apply to EC2 jobs.

       -x

          Disable X11 forwarding.


Examples

       % condor_ssh_to_job 32.0
       Welcome to slot2@tonic.cs.wisc.edu!
       Your condor job is running with pid(s) 65881.
       % gdb -p 65881
       (gdb) where
       % logout
       Connection to condor-job.tonic.cs.wisc.edu closed.

       To upload or download files interactively with sftp:

       % condor_ssh_to_job -ssh sftp 32.0
       Connecting to condor-job.tonic.cs.wisc.edu...
       sftp> ls
       sftp> get outputfile.dat

       This example shows downloading a file from the job with scp. The
       string "remote" is used in place of a host name in this example. It
       is not necessary to insert the correct remote host name, or even a
       valid one, because the connection to the job is created
       automatically. Therefore, the placeholder string "remote" is
       perfectly fine.

       % condor_ssh_to_job -ssh scp 32 remote:outputfile.dat .

       This example uses condor_ssh_to_job to accomplish the task of
       running rsync to synchronize a local file with a remote file in the
       job's working directory. Job id 32.0 is used in place of a host name
       in this example. This causes rsync to insert the expected job id in
       the arguments to condor_ssh_to_job.

       % rsync -v -e "condor_ssh_to_job" 32.0:outputfile.dat .

       Note that condor_ssh_to_job was added to HTCondor in version 7.3. If
       one uses condor_ssh_to_job to connect to a job on an execute machine
       running a version of HTCondor older than the 7.3 series, the command
       will fail with the error message

       Failed to send CREATE_JOB_OWNER_SEC_SESSION to starter


Exit Status

       condor_ssh_to_job will exit with a non-zero status value if it fails
       to set up an ssh session. If it succeeds, it will exit with the
       status value of the remote command or shell.


Author

       Center for High Throughput Computing, University of Wisconsin-Madison

       Copyright (C) 1990-2018 Center for High Throughput Computing,
       Computer Sciences Department, University of Wisconsin-Madison,
       Madison, WI. All Rights Reserved. Licensed under the Apache License,
       Version 2.0.

                                     date  just-man-pages/condor_ssh_to_job(1)