SSH

It is easy to set up Dask on informally managed networks of machines using SSH. This can be done manually using SSH and the Dask command line interface, or automatically using either the dask.distributed.SSHCluster Python cluster manager or the dask-ssh command line tool. This document describes both of these automated options.

Note

Before instantiating an SSHCluster it is recommended to configure keyless SSH from your local machine to the other machines. For example, on a Mac, to SSH into localhost (the local machine) you need to ensure the Remote Login option is set in System Preferences -> Sharing. In addition, your public key (for example, id_rsa.pub) should be listed in the remote machine's authorized_keys file for keyless login.
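On most systems, keyless SSH can be set up with the standard OpenSSH tools. A minimal sketch (the key type and the user@hostname target are placeholders; adjust for your setup):

$ ssh-keygen -t rsa          # create a key pair, if you do not already have one
$ ssh-copy-id user@hostname  # append your public key to the remote authorized_keys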

Python Interface

dask.distributed.SSHCluster(hosts: list[str] | None = None, connect_options: dict | list[dict] | None = None, worker_options: dict | None = None, scheduler_options: dict | None = None, worker_module: str = 'deprecated', worker_class: str = 'distributed.Nanny', remote_python: str | list[str] | None = None, **kwargs: Any) → distributed.deploy.spec.SpecCluster

Deploy a Dask cluster using SSH

The SSHCluster function deploys a Dask Scheduler and Workers for you on a set of machine addresses that you provide. The first address will be used for the scheduler and the rest for the workers (feel free to repeat the first hostname if you want the scheduler and a worker to share one machine).

You may configure the scheduler and workers by passing dictionaries to the scheduler_options and worker_options keywords. See the dask.distributed.Scheduler and dask.distributed.Worker classes for details on the available options; the defaults should work in most situations.

You may configure your use of SSH itself using the connect_options keyword, which passes values through to the asyncssh.connect function. For more information, see the documentation for the asyncssh library: https://asyncssh.readthedocs.io.

Parameters
hosts

List of hostnames or addresses on which to launch our cluster. The first will be used for the scheduler and the rest for workers.

connect_options

Keywords to pass through to asyncssh.connect(). This could include things such as port, username, password or known_hosts. See the docs for asyncssh.connect() and asyncssh.SSHClientConnectionOptions for full information. If a list, it must have the same length as hosts (see the per-host example at the end of the Examples section below).

worker_options

Keywords to pass on to workers.

scheduler_options

Keywords to pass on to scheduler.

worker_class

The Python class to use to create the worker(s).

remote_python

Path to Python on remote nodes.

See also

dask.distributed.Scheduler
dask.distributed.Worker
asyncssh.connect

Examples

Create a cluster with one worker:

>>> from dask.distributed import Client, SSHCluster
>>> cluster = SSHCluster(["localhost", "localhost"])
>>> client = Client(cluster)

Create a cluster with three workers, each with two threads, and host the dashboard on port 8797:

>>> from dask.distributed import Client, SSHCluster
>>> cluster = SSHCluster(
...     ["localhost", "localhost", "localhost", "localhost"],
...     connect_options={"known_hosts": None},
...     worker_options={"nthreads": 2},
...     scheduler_options={"port": 0, "dashboard_address": ":8797"}
... )
>>> client = Client(cluster)

Create a cluster with two workers on each host:

>>> from dask.distributed import Client, SSHCluster
>>> cluster = SSHCluster(
...     ["localhost", "localhost", "localhost", "localhost"],
...     connect_options={"known_hosts": None},
...     worker_options={"nthreads": 2, "n_workers": 2},
...     scheduler_options={"port": 0, "dashboard_address": ":8797"}
... )
>>> client = Client(cluster)

An example using a different worker class, in particular the CUDAWorker from the dask-cuda project:

>>> from dask.distributed import Client, SSHCluster
>>> cluster = SSHCluster(
...     ["localhost", "hostwithgpus", "anothergpuhost"],
...     connect_options={"known_hosts": None},
...     scheduler_options={"port": 0, "dashboard_address": ":8797"},
...     worker_class="dask_cuda.CUDAWorker")
>>> client = Client(cluster)
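As noted above, connect_options may also be given as a list with one entry per host. A minimal sketch, assuming different usernames on the scheduler and worker machines (the hostnames and usernames here are placeholders):

>>> from dask.distributed import Client, SSHCluster
>>> cluster = SSHCluster(
...     ["scheduler-host", "worker-host"],
...     connect_options=[
...         {"username": "alice", "known_hosts": None},
...         {"username": "bob", "known_hosts": None},
...     ],
... )
>>> client = Client(cluster)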

Command Line

The convenience script dask-ssh opens several SSH connections to your target computers and initializes a Dask cluster on them. You can give it a list of hostnames or IP addresses:

$ dask-ssh 192.168.0.1 192.168.0.2 192.168.0.3 192.168.0.4

Or you can use normal UNIX grouping:

$ dask-ssh 192.168.0.{1,2,3,4}

Or you can specify a hostfile that includes a list of hosts:

$ cat hostfile.txt
192.168.0.1
192.168.0.2
192.168.0.3
192.168.0.4

$ dask-ssh --hostfile hostfile.txt

Note

The command line documentation here may differ depending on your installed version. We recommend referring to the output of dask-ssh --help.

dask-ssh

Launch a Dask cluster over SSH. A ‘dask scheduler’ process will run on the first host specified in [HOSTNAMES] or in the hostfile, unless --scheduler is specified explicitly. One or more ‘dask worker’ processes will be run on each host. Use the flag --nworkers to adjust how many dask worker processes are run on each host and the flag --nthreads to adjust how many CPUs are used by each dask worker process.

dask-ssh [OPTIONS] [HOSTNAMES]...
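For example, to launch two worker processes with four threads each on every host in a hostfile (a sketch using the options described below; adjust the counts to your machines):

$ dask-ssh --nworkers 2 --nthreads 4 --hostfile hostfile.txt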

Options

--scheduler <scheduler>

Specify scheduler node. Defaults to first address.

--scheduler-port <scheduler_port>

Specify scheduler port number.

Default

8786

--nthreads <nthreads>

Number of threads per worker process. Defaults to the number of cores divided by the number of worker processes per host; for example, on an 8-core host with --nworkers 2, each worker process gets 4 threads.

--nworkers <n_workers>

Number of worker processes per host.

Default

1

--hostfile <hostfile>

Text file with hostnames/IP addresses.

--ssh-username <ssh_username>

Username to use when establishing SSH connections.

--ssh-port <ssh_port>

Port to use for SSH connections.

Default

22

--ssh-private-key <ssh_private_key>

Private key file to use for SSH connections.

--nohost

Do not pass the hostname to the worker.

--log-directory <log_directory>

Directory to use on all cluster nodes for the output of dask scheduler and dask worker commands.

--local-directory <local_directory>

Directory to use on all cluster nodes to place worker files.

--remote-python <remote_python>

Path to Python on remote nodes.

--memory-limit <memory_limit>

Bytes of memory that the worker can use. This can be an integer (bytes), a float (fraction of total system memory), a string (like 5GB or 5000M), ‘auto’, or zero for no memory management.

Default

'auto'

--worker-port <worker_port>

Serving computation port, defaults to random

--nanny-port <nanny_port>

Serving nanny port, defaults to random

--remote-dask-worker <remote_dask_worker>

Worker module to run on the remote hosts.

Default

'distributed.cli.dask_worker'

--version

Show the version and exit.
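Several of the options above can be combined in one invocation. A sketch using a dedicated SSH user and key, and capping worker memory (the username, key path, and addresses are placeholders):

$ dask-ssh --ssh-username dask --ssh-private-key ~/.ssh/dask_key --memory-limit 4GB 192.168.0.{1,2,3,4}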

Arguments

HOSTNAMES

Optional argument(s)