OpenSSH/Cookbook/Remote Processes

One of the main functions of OpenSSH is to access and run programs on other systems. There are several ways to expand upon that, either interactively or as part of unattended scripts. In addition to an interactive login, ssh(1) can be used to simply execute a program or script, logging out automatically when the program or script has run its course. Some combinations are readily obvious; others require more careful planning. Sometimes it is enough of a clue just to know that something can be done, at other times more detail is required. A number of examples of useful combinations for running remote tasks with OpenSSH follow.

Run a Remote Process
An obvious use of ssh(1) is to run a program on the remote system and then exit. Often this is a shell, but it can be any program available to the account. When the remote process completes, ssh(1) terminates and passes on the exit value of the last remote process to finish. In this way it can be used in scripts, with the outcome of the remote processes available to the client system.

The following will run true(1) on the remote system and return success, exit code 0, to the local system where ssh(1) was run.
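A minimal sketch, assuming a hypothetical account and host user@server.example.org:

```shell
ssh user@server.example.org true
echo $?        # the remote exit status, 0, is passed back to the client
```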

The following will run false(1) on the remote system and return failure, exit code 1, to the local system where ssh(1) was run.
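Again with the hypothetical host user@server.example.org:

```shell
ssh user@server.example.org false
echo $?        # the remote exit status, 1, is passed back to the client
```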

If any other value, from 0 to 255, is returned, ssh(1) will pass it back from the remote host to the local host.

Run a Remote Process and Capture Output Locally
Output from programs run on the remote machine can be saved locally using a normal redirect. Here we run dmesg(8) on the remote machine:
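For example, assuming the hypothetical host user@server.example.org; the redirect is processed by the local shell, so the file is written locally:

```shell
ssh user@server.example.org dmesg > dmesg.server.log
```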

Interactive processes will be difficult or impossible to operate in that manner because no output will be seen. For interactive processes requiring user input, output can instead be piped through tee(1) to send it both to the file and to stdout. This example runs an anonymous FTP session remotely and logs the output locally.
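A sketch of that, with hypothetical host names:

```shell
ssh user@server.example.org ftp ftp.example.org | tee ftp-session.log
```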

It may be necessary to force pseudo-TTY allocation to get both input and output to be properly visible.
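The -t option forces pseudo-TTY allocation; the same hypothetical session might then look like this:

```shell
ssh -t user@server.example.org ftp ftp.example.org | tee ftp-session.log
```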

The simplest way to read data on one machine and process it on another is to use pipes.
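For instance, a remote file can be read on the far machine and filtered locally; the log path here is just an example:

```shell
ssh user@server.example.org 'cat /var/log/messages' | grep -i error
```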

Run a Local Process and Capture Remote Data
Data can be produced on one system and used on another. This is different than tunneling X, where both the program and the data reside on the other machine and only the graphical interface is displayed locally. Again, the simplest way to read data on one machine and use it on another is to use pipes.
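A sketch going in the other direction, producing data locally and storing it remotely; the directory and file names are hypothetical:

```shell
tar czf - ./project | ssh user@server.example.org 'cat > project-backup.tgz'
```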

In the case where the local program expects to read a file from the remote machine, a named pipe can be used in conjunction with a redirect to transfer the data. In the following example, a named pipe is created as a transfer point for the data. Then ssh(1) is used to launch a remote process which writes to stdout; a redirection on the local machine captures that output and sends it to the named pipe, where a local program can read it.

In this particular example, it is important to add a filter rule to tcpdump(8) itself to prevent an infinite feedback loop if ssh(1) is connecting over the same interface as the data being collected. This loop is prevented by excluding either the SSH port, the host used by the SSH connection, or the corresponding network interface.

Any sudo(8) privileges for tcpdump(8) also need to operate without an interactive password, so great care and precision must be exercised to spell out in /etc/sudoers exactly which program and parameters are to be permitted and nothing more. The authentication for ssh(1) must also occur non-interactively, such as with a key and key agent. Once the configurations are set, ssh(1) is run and sent to the background after connecting. With ssh(1) in the background the local application is launched, in this case wireshark(1), a graphical network analyzer, which is set to read the named pipe as input.
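Putting the pieces together, a sketch of the whole workflow; the host name, capture interface, and fifo path are assumptions:

```shell
# create the named pipe as the local transfer point
mkfifo /tmp/packets.fifo

# capture remotely, excluding the SSH port to avoid a feedback loop;
# -f sends ssh(1) to the background after authentication
ssh -f user@server.example.org \
    'sudo tcpdump -i eth0 -w - not port 22' > /tmp/packets.fifo

# read the named pipe with the local analyzer
wireshark -k -i /tmp/packets.fifo
```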

On some systems, process substitution can be used to simplify the transfer of data between the two machines. Doing process substitution requires only a single line.
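A sketch of the same capture as a single line using process substitution, with the same hypothetical host and interface:

```shell
wireshark -k -i <(ssh user@server.example.org 'sudo tcpdump -i eth0 -w - not port 22')
```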

However, process substitution is not part of POSIX and thus not portable across platforms. It is found in shells such as bash(1), zsh(1), and ksh(1), but not in a plain POSIX sh. So, for portability, use a named pipe.

Run a Remote Process While Either Connected or Disconnected
There are several different ways to leave a process running on the remote machine. If the intent is to come back to the process and check on it periodically then a terminal multiplexer is probably the best choice. For simpler needs there are other approaches.

Run a Remote Process in the Background While Disconnected
Many routine tasks can be set in motion and then left to complete on their own without needing to stay logged in. When running a remote process in the background it is useful to spawn a shell just for that task.
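A sketch of spawning a shell for a background task; the job name is hypothetical:

```shell
ssh user@server.example.org 'sh -c "nohup ./longjob > longjob.log 2>&1 &"'
```

With stdout and stderr redirected and the process detached with nohup(1), the SSH connection can close cleanly while the job keeps running.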

Another way is to use a terminal multiplexer. An advantage with them is being able to reconnect and follow the progress from time to time, or simply to resume work in progress when a connection is interrupted such as when traveling. Here tmux(1) reattaches to an existing session or else, if there is none, then creates a new one.
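For example, with a hypothetical host:

```shell
ssh -t user@server.example.org 'tmux attach || tmux new'
```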

On older systems, screen(1) is often available. Here it is launched remotely to re-attach to an existing session if one is already running, or else to create a new one. So if no screen(1) session is running, one will be created.
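The -d -R options to screen(1) do exactly that in one step:

```shell
ssh -t user@server.example.org screen -d -R
```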

Once a screen(1) session is running, it is possible to detach it and close the SSH connection without disturbing the background processes it may be running. That can be particularly useful when hosting certain game servers on a remote machine. The terminal session can then be reattached in progress with the same two options.

There is more on using the terminal multiplexers tmux(1) and screen(1) below. In some environments it might be necessary to also use pagsh(1), especially with Kerberos; see below. Or nohup(1) might be of use.

Keeping Authentication Tickets for a Remote Process After Disconnecting
Authentication credentials are often deleted upon logout and thus any remaining processes no longer have access to whatever the authentication tokens were used for. In such cases, it is necessary to first create a new credential cache sandbox to run an independent process in before disconnecting.

Kerberos and AFS are two examples of services that require valid, active tickets. Using pagsh(1) is one solution for those environments.
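A rough sketch for a Kerberos/AFS site; the exact commands are site-specific and the job name is hypothetical. Run inside the SSH session on the remote host, pagsh(1) creates a new credential sandbox (a PAG) in which fresh tickets and tokens are obtained before the long-running job is started:

```shell
pagsh -c 'kinit && aklog && nohup ./longjob > longjob.log 2>&1 &'
```

Because the job holds its own credentials inside the PAG, they survive after the login session's credentials are destroyed at logout.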

Automatically Reconnect and Restore an SSH Session Using tmux(1) or screen(1)
Active, running sessions can be restored after either an intentional or accidental break by using a terminal multiplexer. Here ssh(1) is told to assume the connection is broken, and to exit, after 15 seconds (three tries of five seconds each) of not being able to reach the server. Then the tmux(1) session is reattached or, if absent, created.
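A sketch of those settings with a hypothetical host:

```shell
ssh -o ServerAliveInterval=5 -o ServerAliveCountMax=3 \
    -t user@server.example.org 'tmux attach || tmux new'
```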

Then each time ssh(1) exits, the shell tries to connect again and, once connected, looks for a session to attach to. That way, if the TCP or SSH connection is broken, none of the applications or sessions inside the terminal multiplexer stop running. Here is an example for screen(1) on older systems.
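A sketch of such a reconnect loop for screen(1), again with a hypothetical host:

```shell
while true; do
    ssh -o ServerAliveInterval=5 -o ServerAliveCountMax=3 \
        -t user@server.example.org screen -d -R
    sleep 5
done
```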

The above examples are only a simplistic demonstration; at their most basic they are useful for resuming a shell where it left off after the TCP connection was broken. Both tmux(1) and screen(1) are capable of much more and are worth exploring, especially for travelers and telecommuters.

See also the section on "Public Key Authentication" to integrate keys into the process of automatically reconnecting.

Sharing a Remote Shell
Teaching, team programming, supervision, and creating documentation are some examples of when it can be useful for two people to share a shell. There are several options for read-only viewing as well as for multiple parties being able to read and write.

Read-only Monitoring or Logging
Pipes and redirects are a quick way to save output from an SSH session or to allow additional users to follow along read-only.

One sample use case is when a router needs to be reconfigured and is available via serial console. Say the router is down and a consultant must log in via another user's laptop's connection to access the router's serial console, and it is necessary to supervise what is done or to help at certain stages. Logging is also very useful for documenting various activities, including configuration or installation.

Read-only Using tee(1)
Capture shell activity to a log file and optionally use tail(1) to watch it in real time. The utility tee(1), like a t-joint in plumbing, is used here to send output to two destinations: stdout and a file.
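A sketch, with a hypothetical host and log file name:

```shell
ssh -t user@server.example.org | tee ~/ssh-session.log
```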

The resulting file can be monitored live in another terminal using tail(1) or after the fact with a pager like less(1).
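For live monitoring of the hypothetical log file from the previous example:

```shell
tail -f ~/ssh-session.log
```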

Force Serial Session with Remote Logging Using tee(1)
The tee(1) utility can capture output from any program that can write to stdout. It is very useful for walking someone at a remote site through a process, supervising, or building documentation.

This example uses chroot(8) to keep the choice of actions as limited as possible. Actually building the chroot jail is a separate task. Once built, the guest user is made a member of the group 'consult'. The serial connection for the test is on device ttyUSB0, which is a USB-to-serial converter, and cu(1) is used for the connection. tee(1) takes the output from cu(1) and saves a copy to a file for logging while the program is used. The following would go in sshd_config(5):
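A sketch of such a configuration; the chroot directory and log file path are assumptions:

```
Match Group consult
        ChrootDirectory /var/consult
        ForceCommand cu -l /dev/ttyUSB0 | tee -a /var/log/consult/serial.log
```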

With that one or more people can follow the activity in cu(1) as it happens using tail(1) by pointing it at the log file on the remote server.
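For example, assuming the hypothetical log file path from the configuration sketch above exists on the server:

```shell
ssh user@server.example.org tail -f /var/log/consult/serial.log
```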

Or the log can be edited and used for documentation. It is also possible for advanced users of tmux(1) or screen(1) to allow read-only observers.

Scripting
It is possible to automate some of the connection. Make a script, such as /usr/local/bin/screeners, then use that script with the ForceCommand directive. Here is an example of a script that tries to reconnect to an existing session. If no sessions already exist, then a new one is created and automatically establishes a connection to a serial device.
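A hypothetical /usr/local/bin/screeners script along those lines; the session name, serial device, and line speed are all assumptions:

```shell
#!/bin/sh
# Reattach to an existing screen(1) session, or, if none exists,
# start a new one connected to the serial device.
screen -d -r serialconsole || \
    screen -S serialconsole /usr/bin/cu -l /dev/ttyUSB0 -s 115200
```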

Interactive Sharing Using a Terminal Multiplexer
The terminal multiplexers tmux(1) or screen(1) can attach two or more people to the same session. The session can either be read-only for some or read-write for all participants.

tmux(1)
If the same account is going to be sharing the session, then it's rather easy. In the first terminal, start tmux(1) where 'sessionname' is the session name:
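For example:

```shell
tmux new-session -s sessionname
```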

Then in the second terminal:
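Attach to the same named session:

```shell
tmux attach-session -t sessionname
```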

That's all that's needed if the same account is logged in from different locations and will share a session. For different users, you have to set the permissions on the tmux(1) socket so that both users can read and write it. That will first require a group which has both users as members.

Then after both accounts are in the shared group, in the first terminal, the one with the main account, start tmux(1) as before but also assign a name for the session's socket. Here 'sessionname' is the session name and 'sharedsocket' is the name of the socket:
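A sketch, with a hypothetical socket directory:

```shell
mkdir /tmp/shareddir
tmux -S /tmp/shareddir/sharedsocket new-session -s sessionname
```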

Then change the group of the socket and the socket's directory to a group that both users share in common. Make sure that the socket permissions allow the group to write the socket. In this example the shared group is 'foo' and the socket is /tmp/shareddir/sharedsocket.
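With the hypothetical group and paths from the example:

```shell
chgrp foo /tmp/shareddir /tmp/shareddir/sharedsocket
chmod g+rx /tmp/shareddir
chmod g+rw /tmp/shareddir/sharedsocket
```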

Finally, have the second account log in and attach to the designated session using the shared socket.
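From the second account:

```shell
tmux -S /tmp/shareddir/sharedsocket attach-session -t sessionname
```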

At that point, either account will be able to both read and write to the same session.

screen(1)
If the same account is going to share a screen(1) session, then it's an easy procedure. In one terminal, start a new session and assign a name to it. In this example, 'sessionname' is the name of the session:
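```shell
screen -S sessionname
```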

In the other terminal, attach to that session:
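```shell
screen -x sessionname
```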

If two different accounts are going to share the same screen(1) session, then the following extra steps are necessary. The first user does this when initiating the session:
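A sketch, where 'user2' is the hypothetical second account:

```shell
screen -S sessionname
# then, inside the session, at the Ctrl-a : command prompt:
#   multiuser on
#   acladd user2
```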

Then the second user does this:
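Assuming the first account is named 'user1':

```shell
screen -x user1/sessionname
```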

In screen(1), if more than one user account is used, the aclchg command can remove write access for the other user. Note that screen(1) must be installed setuid root for multiuser support. If it is not, an error message will appear as a reminder when trying to connect the second user. It might also be necessary to set the permissions on /var/run/screen to 755.

Display Remote Graphical Programs Locally Using X11 Forwarding
It is possible to run graphical programs on the remote machine and have them displayed locally by forwarding X11, the current implementation of the X Window System. X11 is used to provide the graphical interface on many systems. See the website www.X.org for its history and technical details. It is built into most desktop operating systems. It is even distributed as part of Mac OS X, though there it is not the default method of graphical display.

X11 forwarding is off by default and must be enabled on both the SSH client and server if it is to be used.

X11 also uses a client-server architecture: the X server is the part that does the actual display for the end user, while the various programs act as clients to that server. Thus by putting the client and server on different machines and forwarding the X11 connections, it is possible to run programs on other computers yet have them displayed and available as if they were on the user's own computer.

A note of caution is warranted. Allowing the remote machine to forward X11 connections will allow it and its applications to access many devices and resources on the machine hosting the X server. Regardless of the intent of the users, these are the devices and other resources accessible to the user account. So forwarding should only be done when the other machine, that user account, and its applications are reliable.

On the server side, to enable X11 forwarding by default, put the line below in sshd_config(5), either in the main block or a Match block:
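```
X11Forwarding yes
```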

On the client side, forwarding of X11 is also off by default, but can be enabled in three different ways: in ssh_config(5), or at run time using either the -X or -Y option.
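At run time, a sketch assuming the remote host has xterm(1) installed:

```shell
ssh -X user@server.example.org xterm
```

The -Y option instead requests trusted forwarding, which skips the X11 SECURITY extension restrictions.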

The connection will be slow, however. If responsiveness is a factor, it may be worth considering a SOCKS proxy instead, or some other technology altogether, such as FreeNX.

Using ssh_config(5) to Specify X11 Forwarding
X11 forwarding can be enabled in /etc/ssh/ssh_config for all accounts for all outgoing SSH connections, or for just specific hosts, by configuring ssh_config(5).
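For example, enabling it for all hosts in the system-wide client configuration:

```
Host *
        ForwardX11 yes
```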

It is possible to apply ssh_config(5) settings to just one account in ~/.ssh/config to limit forwarding by default to an individual host by hostname or IP number.
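A sketch for ~/.ssh/config, with a hypothetical host name:

```
Host desk.example.org
        ForwardX11 yes
```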

And here it is enabled for a specific machine by IP number
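Here with an address from the documentation range as a stand-in:

```
Host 192.0.2.10
        ForwardX11 yes
```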

Likewise, use limited pattern matching to allow forwarding for a subdomain or a range of IP addresses. Here it is enabled for any host in the pool.example.org domain, any host from 192.168.100.100 to 192.168.100.109, and any host from 192.168.123.1 through 192.168.123.254:
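A sketch of those three patterns combined; note that the trailing * in the last pattern also matches .0 and .255:

```
Host *.pool.example.org 192.168.100.10? 192.168.123.*
        ForwardX11 yes
```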

Again, X11 is built into most desktop systems. There is an optional add-on for OS X, which has its roots in NeXTSTEP. X11 support may be missing from some particularly outdated, legacy platforms, however. But even there it is often possible to retrofit them using the right tools, one example being Xming.

Unprivileged sshd(8) Service
It is possible to run sshd(8) on a high port using an unprivileged account. It will not be able to fork processes under other accounts, so login will only be possible to the same account which is running the unprivileged service.

There are three general steps to running an unprivileged SSH service on a normal, unprivileged account.

1) Prior to launching the SSH daemon under an unprivileged account, some unprivileged host keys must be created. These keys will be used to identify this service to connecting clients, especially on return visits.  Their creation only needs to be done once per account and the passphrase must be left empty.
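A sketch, using a hypothetical ~/sshd directory to hold the keys; the -m 700 keeps the directory unreadable and unwritable by other accounts, and ssh-keygen(1) creates the private key with restrictive permissions:

```shell
mkdir -m 700 ~/sshd
ssh-keygen -q -t ed25519 -N '' -f ~/sshd/ssh_host_ed25519_key
```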

If needed, do likewise for ECDSA and RSA keys. Be sure that the permissions are correct: the directory the keys are in, and its parents, should not be writable by other accounts, and the private SSH host key itself must not be readable by any other account.

2) When the keys are in place, test them by starting the SSH server manually. One way is by using the -h option to point to these alternative host key files. Note that because the account is unprivileged, only ports from 1024 on up are available; in this example port 2222 is set using the -p option.
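A sketch, assuming the hypothetical key location from above; the path to sshd(8) varies by system, and the daemon must be started with its full path:

```shell
/usr/sbin/sshd -D -p 2222 -h "$HOME/sshd/ssh_host_ed25519_key"
```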

Verify that the new service is accessible from another system. Especially make sure that any packet filters, if there are any along the way, are set to pass the right port. Incoming connections, and any packet filters, will have to take the alternative port into consideration in normal use. Another approach would be to add the unprivileged SSH service as an onion service. See the chapter on Proxies and Jump Hosts about that.

Note that the above examples use the default configuration file for the daemon. If a different configuration file is needed for special settings, then add the -f option to the formula.

Specific configuration directives can be set for the unprivileged account by using an alternative configuration file. Unprivileged host keys can be identified in that file along with a designated alternative listening port and many other customizations. It might also be helpful to log the unprivileged service separately from any other SSH services already on the same system by changing the SyslogFacility directive to something other than AUTH which is the default.
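A sketch of such an alternative configuration file, with hypothetical paths; it could then be launched with /usr/sbin/sshd -f ~/sshd/sshd_config:

```
Port 2222
HostKey /home/fred/sshd/ssh_host_ed25519_key
PidFile /home/fred/sshd/sshd.pid
SyslogFacility LOCAL0
```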

3) Automate the startup, if that is a goal.

The -D option keeps the process from detaching and becoming a daemon. During a test that might be useful, but later in actual use it might not be; it depends on how the process is launched.

What Unprivileged sshd(8) Processes Look Like
Because an unprivileged account is being used, privilege separation will not occur and all the child processes will run in the same account as the original sshd(8) process. Here's how that can look while listening on port 2222 for an incoming connection using an unprivileged account, using the default configuration file with a few overrides:

Then once someone has connected but not yet finished logging in, it can look like this. A monitor is forked, but even though it is labeled [priv] it is not privileged; it is still running in the same unprivileged account as the parent process. It in turn forks a child, here process 1998150, to manage the authentication:

Finally, once login has been successful, that unprivileged process is dropped and a new unprivileged process spun up to manage the interactive session.

Above, process 1998410 is managing the interactive session.

Locking Down a Restricted Shell
A restricted shell sets up a more controlled environment than what is normally provided by a standard interactive shell. Though it behaves almost identically to a standard shell, it disables everything outside a whitelisted set of capabilities. The restrictions include, but are not limited to, the following:


 * The SHELL, ENV, and PATH variables cannot be changed.
 * Programs can't be run with absolute or relative paths.
 * Redirections that create files can't be used (specifically >, >|, >>, <>).

Common high-end shells like bash(1), ksh(1), and zsh(1) all can be launched in restricted mode. See the manual pages for the individual shells for the specifics of how to invoke restrictions.

Even with said restrictions, there are several ways by which it is trivial to escape from a restricted shell. If normal shells are available anywhere in the path, they can be launched instead. If regular programs in the available path provide shell escapes to full shells, they too can be used. Finally, if sshd(8) is configured to allow arbitrary programs to be run independently of the shell, a full shell can be launched instead. So there is more to safely using restricted shells than just setting the account's shell to one and calling it a day. Several steps are needed to make it as difficult as possible to escape the restrictions, especially over SSH.

(The following steps assume familiarity with the appropriate system administration tools and their use. Their selection and use are not covered here.)

First, create a directory containing a handful of symbolic links that point to white-listed programs. The links point to the programs that the account should be able to run when the directory is added to the PATH environment variable. These programs should have no shell escape capabilities and, obviously, they should not themselves be unrestricted shells.
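A sketch; the directory name and the choice of whitelisted programs are assumptions, and the source paths may differ by system:

```shell
# directory that will be placed on the restricted account's PATH
mkdir -p /usr/local/rbin

# symbolic links to a few harmless, escape-free programs
ln -s /bin/date /usr/local/rbin/date
ln -s /usr/bin/tty /usr/local/rbin/tty
```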

If you want to prevent exploration of the system at large, remember to also lock the user into a chroot or jail. Even without programs like ls(1) and cat(1), exploration is still possible (see below: "ways to explore without ls(1) and cat(1)").

Symbolic links are used because the originals are, hopefully, maintained by package management software and should not be moved. Hard links cannot be used if the original and whitelisted directories are in separate filesystems. Hard links are necessary if you set up a chroot or jail that excludes the originals.

Next, create a minimal .profile for that account. Set its owner to 'root'. Do this for its parent directory too (which is the user's home directory). Then, allow the account's own group to read both the file and the directory.
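A minimal sketch of such a .profile, assuming the hypothetical whitelist directory /usr/local/rbin:

```shell
# ~/.profile for the restricted account: only the whitelist directory in PATH
PATH=/usr/local/rbin
export PATH
```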

Next, create a group for the locked down account(s) and populate it. Here the account is in the group games and will be restricted through its membership in that group.
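A sketch with a hypothetical account name; on systems where the group already exists, skip the first step:

```shell
groupadd games
usermod -aG games fred
```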

Next, lock down SSH access for that group or account through use of a ForceCommand directive in the server configuration and apply it to the selected group. This is necessary to prevent trivial circumvention through the SSH client by calling a shell directly, such as by passing /bin/sh as the remote command. Remember to disable forwarding if it is not needed. For example, the following can be appended to sshd_config(5) so that any account in the group 'games' gets a restricted shell no matter what they try with the SSH client.
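A sketch; the path to the restricted shell varies by system:

```
Match Group games
        ForceCommand /bin/rbash -l
        AllowTcpForwarding no
        X11Forwarding no
```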

Note that the restricted shell is invoked with the -l option by the ForceCommand so that it will be a login shell that reads and executes the contents of /etc/profile and $HOME/.profile if they exist and are readable. This is necessary to set the custom PATH environment variable. Again, be sure that $HOME/.profile is not in any way editable or overwritable by the restricted account. Also note that this disables SFTP access by that account, which prevents quite a bit of additional mischief.

Last but not least, set the account's login shell to the restricted shell. Include the full path to the restricted shell. It might also be necessary to add it to the list of approved shells found in /etc/shells first.