# Commands
The CLI performs operations in the user workspace context by default. Use the `TOWER_WORKSPACE_ID` environment variable or the `--workspace` parameter to specify an organization workspace ID.
Use the `-h` or `--help` parameter to list the available commands and their associated options.

For help with a specific subcommand, run the command with `-h` or `--help` appended. For example, `tw credentials add google -h`.
Use `tw --output=json <command>` to dump and store Seqera Platform entities in JSON format.

Pipe the JSON output to `jq` with `tw --output=json <command> | jq -r '.[].<key>'` to retrieve specific values. For example, `tw --output=json workspaces list | jq -r '.workspaces[].orgId'` returns the organization ID for each workspace listed.
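If `jq` is not available, the same extraction can be scripted over the saved JSON. A minimal sketch, assuming the `workspaces` list structure used in the `jq` example above (the file name and values are made up for illustration):

```shell
# Hypothetical sample of the JSON returned by `tw --output=json workspaces list`
# (field names follow the jq example above; the values are made up)
cat > workspaces.json <<'EOF'
{"workspaces": [
  {"orgId": 12345, "workspaceId": 67890, "workspaceName": "project-a"},
  {"orgId": 12345, "workspaceId": 67891, "workspaceName": "project-b"}
]}
EOF

# Equivalent of `jq -r '.workspaces[].orgId'`, using python3 instead of jq
python3 -c '
import json, sys
for ws in json.load(sys.stdin)["workspaces"]:
    print(ws["orgId"])
' < workspaces.json
```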
## Credentials
To launch pipelines in a Platform workspace, you need credentials for:
- Compute environments
- Pipeline repository Git providers
- (Optional) Tower Agent, used with HPC clusters
- (Optional) Container registries, such as docker.io
### Add credentials
Run `tw credentials add -h` to view a list of providers.

Run `tw credentials add <provider> -h` to view the required fields for your provider.
You can add multiple credentials from the same provider in the same workspace.
### Compute environment credentials
Platform requires credentials to access your cloud compute environments. See the compute environment page for your cloud provider for more information.
```bash
tw credentials add aws --name=my_aws_creds --access-key=<aws access key> --secret-key=<aws secret key>
```

```
New AWS credentials 'my_aws_creds (1sxCxvxfx8xnxdxGxQxqxH)' added at user workspace
```
### Git credentials
Platform requires access credentials to interact with pipeline Git repositories. See Git integration for more information.
```bash
tw credentials add github -n=my_GH_creds -u=<GitHub username> -p=<GitHub access token>
```

```
New GITHUB credentials 'my_GH_creds (xxxxx3prfGlpxxxvR2xxxxo7ow)' added at user workspace
```
### Container registry credentials
Configure credentials for the Nextflow Wave container service to authenticate to private and public container registries. See the Container registry credentials section under Credentials for registry-specific instructions.
Container registry credentials are only used by the Wave container service. See Wave containers for more information.
### List credentials
```bash
tw credentials list
```

```
Credentials at user workspace:

  ID                     | Provider  | Name                               | Last activity
 ------------------------+-----------+------------------------------------+-------------------------------
  1x1HxFxzxNxptxlx4xO7Gx | aws       | my_aws_creds_1                     | Wed, 6 Apr 2022 08:40:49 GMT
  1sxCxvxfx8xnxdxGxQxqxH | aws       | my_aws_creds_2                     | Wed, 9 Apr 2022 08:40:49 GMT
  2x7xNsf2xkxxUIxXKxsTCx | ssh       | my_ssh_key                         | Thu, 8 Jul 2021 07:09:46 GMT
  4xxxIeUx7xex1xqx1xxesk | github    | my_github_cred                     | Wed, 22 Jun 2022 09:18:05 GMT
```
### Delete credentials
```bash
tw credentials delete --name=my_aws_creds
```

```
Credentials '1sxCxvxfx8xnxdxGxQxqxH' deleted at user workspace
```
## Compute environments
Compute environments define the execution platform where a pipeline runs. A compute environment is composed of the credentials, configuration, and storage options related to a particular computing platform. See Compute environments for more information on supported providers.
Run `tw compute-envs -h` to view the list of supported compute environment operations.
### Add a compute environment
Run `tw compute-envs add -h` to view the list of supported providers.

Run `tw compute-envs add <platform> -h` to view the required and optional fields for your provider.

You must add the credentials for your provider before creating your compute environment.
```bash
tw compute-envs add aws-batch forge --name=my_aws_ce \
    --credentials=<my_aws_creds_1> --region=eu-west-1 --max-cpus=256 \
    --work-dir=s3://<bucket name> --wait=AVAILABLE
```

```
New AWS-BATCH compute environment 'my_aws_ce' added at user workspace
```
This command will:

- Use Batch Forge to automatically manage the AWS Batch resource lifecycle (`forge`)
- Use the credentials previously added to the workspace (`--credentials`)
- Create the required AWS Batch resources in the AWS Ireland (`eu-west-1`) region
- Provision a maximum of 256 CPUs in the compute environment (`--max-cpus`)
- Use an existing S3 bucket to store the Nextflow work directory (`--work-dir`)
- Wait until the compute environment has been successfully created and is ready to use (`--wait`)
See the compute environment page for your provider for detailed information on Batch Forge and manual compute environment creation.
### Delete a compute environment
```bash
tw compute-envs delete --name=my_aws_ce
```

```
Compute environment '1sxCxvxfx8xnxdxGxQxqxH' deleted at user workspace
```
### Default compute environment
Select a primary compute environment to be used by default in a workspace. You can override the workspace primary compute environment by explicitly specifying an alternative compute environment when you create or launch a pipeline.
```bash
tw compute-envs primary set --name=my_aws_ce
```

```
Primary compute environment for workspace 'user' was set to 'my_aws_ce (1sxCxvxfx8xnxdxGxQxqxH)'
```
### Import and export a compute environment
Export the configuration details of a compute environment in JSON format for scripting and reproducibility purposes.
```bash
tw compute-envs export --name=my_aws_ce my_aws_ce_v1.json
```

```
Compute environment exported into 'my_aws_ce_v1.json'
```
Similarly, a compute environment can be imported to a workspace from a previously exported JSON file.
```bash
tw compute-envs import --name=my_aws_ce_v1 ./my_aws_ce_v1.json
```

```
New AWS-BATCH compute environment 'my_aws_ce_v1' added at user workspace
```
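Because exports are plain JSON, they can be kept under version control and compared before re-importing. A minimal sketch of reviewing changes between two exported versions; the file names and fields below are hypothetical, not the actual `tw` export schema:

```shell
# Two hypothetical exported compute environment configs
# (illustrative fields only, not the real export schema)
printf '{"region": "eu-west-1", "maxCpus": 256}\n' > my_aws_ce_v1.json
printf '{"region": "eu-west-1", "maxCpus": 512}\n' > my_aws_ce_v2.json

# Review what changed between versions before importing the newer one
diff my_aws_ce_v1.json my_aws_ce_v2.json || true
```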
## Datasets
Run `tw datasets -h` to view the list of supported operations.
Datasets are CSV (comma-separated values) and TSV (tab-separated values) files stored in a workspace, used as inputs during pipeline execution. The most commonly used datasets for Nextflow pipelines are samplesheets, where each row consists of a sample, the location of files for that sample (such as FASTQ files), and other sample details.
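For example, a minimal paired-end samplesheet might look like the following. The column names and S3 paths are illustrative; the exact schema depends on the pipeline consuming the dataset:

```csv
sample,fastq_1,fastq_2
sample_01,s3://my-bucket/data/sample_01_R1.fastq.gz,s3://my-bucket/data/sample_01_R2.fastq.gz
sample_02,s3://my-bucket/data/sample_02_R1.fastq.gz,s3://my-bucket/data/sample_02_R2.fastq.gz
```

Run `tw datasets add -h` to view the options for uploading a file like this as a dataset in your workspace.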