caelus.run – CML Execution Utilities

Caelus Tasks Manager

class caelus.run.tasks.Tasks[source]

Bases: object

Caelus Tasks.

Tasks provides a simple automated workflow interface that exposes various pre-defined actions through a YAML file.

Tasks are defined as methods with a cmd_ prefix; the method names are automatically converted to task names. Users can create additional tasks by subclassing Tasks and adding methods with the cmd_ prefix. These methods accept a single argument, options, a dictionary containing the parameters provided by the user for that particular task.
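For example, a subclass can register a custom task by following the cmd_ naming convention; the task and option names below are hypothetical:

    from caelus.run.tasks import Tasks

    class MyTasks(Tasks):
        """Tasks subclass providing one extra action."""

        def cmd_echo_message(self, options):
            """Print a message; available as the 'echo_message' task."""
            # 'message' is a hypothetical option key supplied by the user
            # in the tasks YAML file for this task
            print(options.get("message", ""))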

cmd_clean_case(options)[source]

Clean a case directory

cmd_copy_files(options)[source]

Copy given file(s) to the destination.

cmd_copy_tree(options)[source]

Recursively copy a given directory to the destination.

cmd_exec_tasks(options)[source]

Execute another task file

cmd_process_logs(options)[source]

Process logs for a case

cmd_run_command(options)[source]

Execute a Caelus CML binary.

cmd_task_set(options)[source]

A subset of tasks for grouping

classmethod load(task_file='caelus_tasks.yaml', task_node='tasks')[source]

Load tasks from a YAML file.

The execution directory is set to the directory where the task file is found.

Parameters:
  • task_file (filename) – Path to the YAML file
  • task_node (str) – Name of the YAML node containing the task list
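A minimal usage sketch, assuming a caelus_tasks.yaml file exists in the current directory:

    from caelus.run.tasks import Tasks

    tasks = Tasks.load(task_file="caelus_tasks.yaml")
    print(tasks.task_file)   # file the tasks were loaded from
    print(len(tasks.tasks))  # number of task entries to perform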
case_dir = None

Directory where the tasks are to be executed

env = None

Caelus environment used when executing tasks

task_file = None

File that was used to load tasks

tasks = None

List of tasks that must be performed

class caelus.run.tasks.TasksMeta(name, bases, cdict)[source]

Bases: type

Process available tasks within each Tasks class.

TasksMeta is a metaclass that automates the process of creating a lookup table for tasks that have been implemented within Tasks and any of its subclasses. Upon initialization of the class, it populates a class attribute task_map that contains a mapping between the task name (used in the tasks YAML file) and the corresponding method of the Tasks class.
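A simplified sketch of the mechanism described above; the real implementation may differ, for instance in how tasks inherited from base classes are merged:

    class TasksMeta(type):
        """Collect cmd_-prefixed methods into a task lookup table."""

        def __init__(cls, name, bases, cdict):
            super(TasksMeta, cls).__init__(name, bases, cdict)
            # Strip the 'cmd_' prefix to recover the task name used
            # in the tasks YAML file
            task_map = {}
            for attr, value in cdict.items():
                if attr.startswith("cmd_") and callable(value):
                    task_map[attr[len("cmd_"):]] = value
            cls.task_map = task_map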

CML Execution Utilities

caelus.run.core.clean_casedir(casedir, preserve_extra=None, preserve_zero=True, purge_mesh=False)[source]

Clean a Caelus case directory.

Cleans files generated by a run. By default, this function always preserves the system, constant, and 0 directories, as well as any YAML or Python files. Additional files and directories can be preserved with the preserve_extra option, which accepts a list of shell wildcard patterns of files/directories that must be preserved.

Parameters:
  • casedir (path) – Absolute path to a case directory.
  • preserve_extra (list) – List of shell wildcard patterns to preserve
  • purge_mesh (bool) – If true, also removes mesh from constant/polyMesh
  • preserve_zero (bool) – If False, removes the 0 directory
Raises:

IOError – clean_casedir will refuse to remove files from a directory that is not a valid Caelus case directory.
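A usage sketch with hypothetical paths and patterns:

    from caelus.run import core

    # Keep the 0 directory and any '*.dat' files, but remove the mesh
    core.clean_casedir("/path/to/case",
                       preserve_extra=["*.dat"],
                       preserve_zero=True,
                       purge_mesh=True)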

caelus.run.core.clean_polymesh(casedir, region=None, preserve_patterns=None)[source]

Clean the polyMesh from the given case directory.

Parameters:
  • casedir (path) – Path to the case directory
  • region (str) – Mesh region to delete
  • preserve_patterns (list) – Shell wildcard patterns of files to preserve
caelus.run.core.clone_case(casedir, template_dir, copy_polymesh=True, copy_zero=True, copy_scripts=True, extra_patterns=None)[source]

Clone a Caelus case directory.

Parameters:
  • casedir (path) – Absolute path to new case directory.
  • template_dir (path) – Case directory to be cloned
  • copy_polymesh (bool) – Copy contents of constant/polyMesh to new case
  • copy_zero (bool) – Copy time=0 directory to new case
  • copy_scripts (bool) – Copy python and YAML files
  • extra_patterns (list) – List of shell wildcard patterns for copying
Returns:

Absolute path to the newly cloned directory

Return type:

path

Raises:

IOError – If either the casedir exists or if the template_dir does not exist or is not a valid Caelus case directory.
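For example (paths are hypothetical):

    from caelus.run import core

    new_case = core.clone_case(
        "/scratch/new_case",        # must not already exist
        "/projects/template_case",  # must be a valid Caelus case
        copy_polymesh=True,
        copy_zero=True)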

caelus.run.core.find_caelus_recipe_dirs(basedir, action_file='caelus_tasks.yaml')[source]

Return case directories that contain action files.

A case directory containing an action file is one that passes the checks in is_caelus_casedir() and also contains the action file specified by the user.

Parameters:
  • basedir (path) – Top-level directory to traverse
  • action_file (filename) – Default is caelus_tasks.yaml
Yields:

Path to the case directory with action files
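Since the function yields paths, iterate over it to process each recipe directory; the base path is hypothetical:

    from caelus.run import core

    for casedir in core.find_caelus_recipe_dirs("/projects/runs"):
        print("Found recipe in:", casedir)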

caelus.run.core.find_case_dirs(basedir)[source]

Recursively search for case directories existing in a path.

Parameters:basedir (path) – Top-level directory to traverse
Yields:Absolute path to the case directory
caelus.run.core.find_recipe_dirs(basedir, action_file='caelus_tasks.yaml')[source]

Return directories that contain the action files

This behaves differently from find_caelus_recipe_dirs() in that it does not require a valid case directory. It assumes that the case directories are sub-directories and that the task file acts on multiple directories.

Parameters:
  • basedir (path) – Top-level directory to traverse
  • action_file (filename) – Default is caelus_tasks.yaml
Yields:

Path to the case directory with action files

caelus.run.core.get_mpi_size(casedir)[source]

Determine the number of MPI ranks to run

caelus.run.core.is_caelus_casedir(root=None)[source]

Check if the path provided looks like a case directory.

A directory is determined to be an OpenFOAM/Caelus case directory if the system and constant directories, as well as the system/controlDict file, exist. No check is performed to determine whether the case will actually run or whether a mesh is present.

Parameters:root (path) – Path to the directory that is checked (default: current working directory)
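A guard sketch using this check before cleanup; the path is hypothetical:

    from caelus.run import core

    if core.is_caelus_casedir("/path/to/case"):
        core.clean_casedir("/path/to/case")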

Job Scheduler Interface

This module provides a unified interface for submitting serial jobs, local MPI-parallel jobs, and parallel jobs on high-performance computing (HPC) queues.

class caelus.run.hpc_queue.HPCQueue(name, cml_env=None, **kwargs)[source]

Abstract base class for job submission interface

name

str – Job name

queue

str – Queue/partition where job is submitted

account

str – Account the job is charged to

num_nodes

int – Number of nodes requested

num_ranks

int – Number of MPI ranks

stdout

path – Filename where standard out is redirected

stderr

path – Filename where standard error is redirected

join_outputs

bool – Merge stdout/stderr to same file

mail_opts

str – Mail options (see specific queue implementation)

email_address

str – Email address for notifications

qos

str – Quality of service

time_limit

str – Wall clock time limit

shell

str – shell to use for scripts

mpi_extra_args

str – additional arguments for MPI

Parameters:
  • name (str) – Name of the job
  • cml_env (CMLEnv) – Environment used for execution
static delete(job_id)[source]

Delete a job from the queue

get_queue_settings()[source]

Return a string with all the necessary queue options

static is_job_scheduler()[source]

Flag indicating whether this is a job scheduler

static is_parallel()[source]

Flag indicating whether the queue type can support parallel runs

prepare_mpi_cmd()[source]

Prepare the MPI invocation

process_run_env()[source]

Populate the run variables for the script

classmethod submit(script_file, job_dependencies=None, extra_args=None, dep_type=None)[source]

Submit the job to the queue

update(settings)[source]

Update queue settings from the given dictionary

write_script(script_name=None)[source]

Write a submission script using the arguments provided

Parameters:script_name (path) – Name of the script file
queue_name = '_ERROR_'

Identifier used for queue

script_body

The contents of the script submitted to the scheduler
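Since HPCQueue is abstract, the sketch below configures one of its concrete subclasses (SlurmQueue, documented below) via the attributes listed above; the setting values are illustrative:

    from caelus.run.hpc_queue import SlurmQueue

    job = SlurmQueue("my_run")
    job.update({
        "queue": "batch",          # partition name (illustrative)
        "num_ranks": 16,
        "time_limit": "04:00:00",
    })
    job.write_script("my_run.sh")  # emit the submission script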

class caelus.run.hpc_queue.PBSQueue(name, cml_env=None, **kwargs)[source]

PBS Queue Interface

Parameters:
  • name (str) – Name of the job
  • cml_env (CMLEnv) – Environment used for execution
static delete(job_id)[source]

Delete the PBS batch job using job ID

get_queue_settings()[source]

Return all PBS options suitable for embedding in script

classmethod submit(script_file, job_dependencies=None, extra_args=None, dep_type='afterok')[source]

Submit a PBS job using qsub command

job_dependencies is a list of PBS job IDs. The submitted job will run depending on the status of the dependencies.

extra_args is a dictionary of arguments passed to the qsub command.

The job ID returned by this method can be used as an argument to the delete method or as an entry in job_dependencies for a subsequent job submission.

Parameters:
  • script_file (path) – Script provided to the qsub command
  • job_dependencies (list) – List of jobs to wait for
  • extra_args (dict) – Extra PBS arguments
Returns:

Job ID as a string

Return type:

str
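A dependency-chaining sketch; the script names are hypothetical:

    from caelus.run.hpc_queue import PBSQueue

    solve_id = PBSQueue.submit("run_solver.sh")
    # Post-processing starts only after the solver job finishes
    post_id = PBSQueue.submit("post_process.sh",
                              job_dependencies=[solve_id])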

class caelus.run.hpc_queue.ParallelJob(name, cml_env=None, **kwargs)[source]

Interface to a parallel job

Parameters:
  • name (str) – Name of the job
  • cml_env (CMLEnv) – Environment used for execution
static is_parallel()[source]

Flag indicating whether the queue type can support parallel runs

prepare_mpi_cmd()[source]

Prepare the MPI invocation

class caelus.run.hpc_queue.SerialJob(name, cml_env=None, **kwargs)[source]

Interface to a serial job

Parameters:
  • name (str) – Name of the job
  • cml_env (CMLEnv) – Environment used for execution
static delete(job_id)[source]

Delete a job from the queue

get_queue_settings()[source]

Return queue settings

static is_job_scheduler()[source]

Flag indicating whether this is a job scheduler

static is_parallel()[source]

Flag indicating whether the queue type can support parallel runs

prepare_mpi_cmd()[source]

Prepare the MPI invocation

classmethod submit(script_file, job_dependencies=None, extra_args=None)[source]

Submit the job to the queue

class caelus.run.hpc_queue.SlurmQueue(name, cml_env=None, **kwargs)[source]

Interface to SLURM queue manager

Parameters:
  • name (str) – Name of the job
  • cml_env (CMLEnv) – Environment used for execution
static delete(job_id)[source]

Delete the SLURM batch job using job ID

get_queue_settings()[source]

Return all SBATCH options suitable for embedding in script

prepare_srun_cmd()[source]

Prepare the call to SLURM srun command

classmethod submit(script_file, job_dependencies=None, extra_args=None, dep_type='afterok')[source]

Submit to SLURM using sbatch command

job_dependencies is a list of SLURM job IDs. The submitted job will not run until all the jobs provided in this list have completed successfully.

extra_args is a dictionary of extra arguments passed to the sbatch command. Note that these can override options provided in the script file as well as introduce additional options during submission.

dep_type can be one of: after, afterok, afternotok, afterany.

The job ID returned by this method can be used as an argument to the delete method or as an entry in job_dependencies for a subsequent job submission.

Parameters:
  • script_file (path) – Script provided to sbatch command
  • job_dependencies (list) – List of jobs to wait for
  • extra_args (dict) – Extra SLURM arguments
  • dep_type (str) – Dependency type
Returns:

Job ID as a string

Return type:

str
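The same chaining pattern with SLURM, selecting an explicit dependency type; the script names are hypothetical:

    from caelus.run.hpc_queue import SlurmQueue

    mesh_id = SlurmQueue.submit("mesh.sh")
    # Run the solver only if the meshing job exits successfully
    solve_id = SlurmQueue.submit("solve.sh",
                                 job_dependencies=[mesh_id],
                                 dep_type="afterok")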

caelus.run.hpc_queue.caelus_execute(cmd, env=None, stdout=sys.stdout, stderr=sys.stderr)[source]

Execute a CML command with the right environment setup

A wrapper around subprocess.Popen that sets up the correct environment before invoking the CML executable.

The command can either be a string or a list of arguments as appropriate for Caelus executables.

Examples

caelus_execute("blockMesh -help")

Parameters:
  • cmd (str or list) – The command to be executed
  • env (CMLEnv) – An instance representing the CML installation (default: latest)
  • stdout – A file handle where standard output is redirected
  • stderr – A file handle where standard error is redirected
Returns:

The Popen instance representing the running command

Return type:

subprocess.Popen
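Because the return value is a subprocess.Popen instance, callers can wait on it and inspect the exit status:

    import sys
    from caelus.run.hpc_queue import caelus_execute

    task = caelus_execute("blockMesh -help",
                          stdout=sys.stdout, stderr=sys.stderr)
    status = task.wait()  # 0 on success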

caelus.run.hpc_queue.get_job_scheduler(queue_type=None)[source]

Return an instance of the job scheduler