caelus.run – CML Execution Utilities

Caelus Tasks Manager

class caelus.run.tasks.Tasks[source]

Bases: object

Caelus Tasks.

Tasks provides a simple automated workflow interface that exposes various pre-defined actions through a YAML file.

The tasks are defined as methods with a cmd_ prefix and are automatically converted to task names. Users can create additional tasks by subclassing and adding methods with the cmd_ prefix. These methods accept a single argument, options, a dictionary containing the parameters provided by the user for that particular task.
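
A minimal sketch of extending Tasks, following the description above. The subclass name, the cmd_report_status task, and the "message" option key are hypothetical and only illustrate the naming convention.

    from caelus.run.tasks import Tasks

    class MyTasks(Tasks):
        """Tasks subclass adding a project-specific action."""

        def cmd_report_status(self, options):
            # The method name maps to the task name 'report_status' in the
            # tasks YAML file; 'options' holds whatever the user placed under
            # that entry ('message' is purely illustrative).
            print("status check:", options.get("message", ""))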

cmd_change_inputs(options)[source]

Change input files in case directory

cmd_clean_case(options)[source]

Clean a case directory

cmd_copy_files(options)[source]

Copy given file(s) to the destination.

cmd_copy_tree(options)[source]

Recursively copy a given directory to the destination.

cmd_exec_tasks(options)[source]

Execute another task file

cmd_process_logs(options)[source]

Process logs for a case

cmd_run_command(options)[source]

Execute a Caelus CML binary.

This method is an interface to CaelusCmd

cmd_run_python(options)[source]

Execute a python script

cmd_task_set(options)[source]

A subset of tasks for grouping

classmethod load(task_file='caelus_tasks.yaml', task_node='tasks')[source]

Load tasks from a YAML file.

By default, the execution directory is set to the directory where the tasks file is found.

Parameters:task_file (filename) – Path to the YAML file
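
A minimal sketch of loading a tasks file. The YAML content shown in the comment is hypothetical; the task names map to the cmd_ methods listed above, but the option keys under each task are illustrative only, and execution of the loaded tasks is not shown here.

    from caelus.run.tasks import Tasks

    # Hypothetical caelus_tasks.yaml:
    #
    # tasks:
    #   - clean_case: {}
    #   - run_command:
    #       cmd_name: blockMesh
    #
    tasks = Tasks.load(task_file="caelus_tasks.yaml")
    print(tasks.task_file)   # file the tasks were loaded from
    print(tasks.tasks)       # list of task entries to be performed
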
case_dir = None

Directory where the tasks are to be executed

env = None

Caelus environment used when executing tasks

task_file = None

File that was used to load tasks

tasks = None

List of tasks that must be performed

class caelus.run.tasks.TasksMeta(name, bases, cdict)[source]

Bases: type

Process available tasks within each Tasks class.

TasksMeta is a metaclass that automates the process of creating a lookup table for tasks implemented within Tasks and any of its subclasses. Upon initialization of the class, it populates the class attribute task_map, which maps the task name (used in the tasks YAML file) to the corresponding method executed by the Tasks class.

CML Simulation

This module defines CMLSimulation, which provides a pythonic interface for dealing with a CML case directory. In addition to providing methods to perform actions, it tracks the state of the analysis at any given time.

The module also provides an abstract interface CMLSimCollection that provides basic infrastructure to manage and manipulate a collection of simulations as a group.

class caelus.run.case.CMLSimCollection(name, env=None, basedir=None)[source]

Bases: caelus.utils.tojson.JSONSerializer

Interface representing a collection of cases

Concrete implementations must provide setup(), which defines how the case is set up (from a template or otherwise).

Provides prep(), solve(), post(), and status() to interact with the collection as a whole. Prep, solve, and post can accept a list of shell-style wildcard patterns that will restrict the actions to matching cases only.

Parameters:
  • name (str) – Unique name for this parametric run
  • env (CMLEnv) – CML execution environment
  • basedir (path) – Path where analysis directory is created
filter_cases(patterns)[source]

Filter the cases based on a list of patterns

The patterns are shell-style wildcard strings to match case directory names.

Parameters:patterns (list) – A list of one or more patterns
classmethod load(env=None, casedir=None, json_file=None)[source]

Reload a persisted analysis group

Parameters:
  • env (CMLEnv) – Environment for the analysis
  • casedir (path) – Path to the case directory
  • json_file (filename) – Persistence information
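
A hedged sketch of reloading a persisted collection and driving it as a group. It assumes CMLParametricRun (documented below) as the concrete implementation, that the "run_matrix" directory already contains the persistence file, and that case names follow an "aoa_*" pattern; all three are illustrative.

    from caelus.run.parametric import CMLParametricRun

    # Reload a previously persisted analysis group
    analysis = CMLParametricRun.load(casedir="run_matrix")

    # Shell-style wildcard patterns restrict actions to matching cases
    analysis.prep(cnames=["aoa_*"])
    analysis.solve(cnames=["aoa_*"])
    analysis.post(cnames=["aoa_*"], force=True)

    # status() yields a (name, status) tuple for each case
    for name, stat in analysis.status():
        print(name, stat)
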
post(cnames=None, force=False)[source]

Run post-processing tasks on the cases

Parameters:
  • cnames (list) – Shell-style wildcard patterns
  • force (bool) – Force rerun
prep(cnames=None, force=False)[source]

Run prep actions on the cases

Parameters:
  • cnames (list) – Shell-style wildcard patterns
  • force (bool) – Force rerun
save_state(**kwargs)[source]

Dump persistence file in JSON format

setup()[source]

Logic to set up the analysis

classmethod simulation_class()[source]

Concrete instance of a Simulation

Default is CMLSimulation

solve(cnames=None, force=False)[source]

Run solve actions on the cases

Parameters:
  • cnames (list) – Shell-style wildcard patterns
  • force (bool) – Force rerun
status()[source]

Return the status of the runs

Yields:tuple – (name, status) for each case
classmethod udf_instance(custom_script=None, udf_params=None)[source]

Return a UDF instance

basedir = None

Location where parametric run setup is located

case_names = None

Names of cases

casedir = None

Location of the parametric run

cases = None

List of CMLSimulation instances

env = None

CML execution environment

name = None

Unique name for this parametric collection of cases

udf = None

UDF function

udf_params

Return the parameters for UDF script

udf_script

Return the UDF script

class caelus.run.case.CMLSimMeta(name, bases, cdict)[source]

Bases: type

Decorator to add dictfile accessors to CMLSimulation

add_dictfile_attrs(attrmap)[source]

Create getters for dictionary file objects

process_attr(key, value)[source]

Create the attribute

class caelus.run.case.CMLSimulation(case_name, cml_env=None, basedir=None, parent=None)[source]

Bases: caelus.utils.tojson.JSONSerializer

Pythonic interface to CML/OpenFOAM simulation

This class defines the notion of an analysis. It provides methods to interact with an analysis directory from within python, and provides basic infrastructure to track the status of the simulation.

After a successful setup(), the simulation moves through a series of stages that can be queried via the status() method; a usage sketch follows the parameter list below:

Status      Description
----------  --------------------------
Setup       Case set up successfully
Prepped     Pre-processing completed
Submitted   Solver initialized
Running     Solver is running
Solved      Solve has completed
DONE        Post-processing completed
FAILED      Some action failed
Parameters:
  • case_name (str) – Unique identifier for the case
  • env (CMLEnv) – CML environment used to setup/run the case
  • basedir (path) – Location where the case is located/created
  • parent (CMLSimCollection) – Instance of the group manager
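
A minimal sketch of driving a single case through the stages listed above. The template directory and case names are hypothetical, and prep_case()/post_case() normally draw their task lists from a run configuration supplied during setup (not shown), so treat this as an outline rather than a ready-to-run script.

    from caelus.run.case import CMLSimulation

    case = CMLSimulation("airfoil_aoa4", basedir="runs")

    # Create the case directory from an existing template case
    case.clone("templates/airfoil", copy_polymesh=True, copy_zero=True)

    # Pre-process, solve, and post-process; with no arguments these fall
    # back to the run configuration provided during the setup phase
    case.prep_case()
    case.solve()
    case.post_case()

    print(case.status())   # e.g. "Solved" or "DONE"
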
case_log(force_reload=False)[source]

Return a SolverLog instance for this case

clean(preserve_extra=None, preserve_polymesh=True, preserve_zero=True, preserve_times=False, preserve_processors=False)[source]

Clean an existing case directory.

Parameters:
  • preserve_extra (list) – List of shell wildcard patterns to preserve
  • preserve_polymesh (bool) – If False, purges polyMesh directory
  • preserve_zero (bool) – If False, removes the 0 directory
  • preserve_times (bool) – If False, removes the time directories
  • preserve_processors (bool) – If False, removes processor directories
clone(template_dir, copy_polymesh=True, copy_zero=True, copy_scripts=True, extra_patterns=None, clean_if_present=False)[source]

Create the case directory from a given template

Parameters:
  • template_dir (path) – Case directory to be cloned
  • copy_polymesh (bool) – Copy contents of constant/polyMesh to new case
  • copy_zero (bool) – Copy time=0 directory to new case
  • copy_scripts (bool) – Copy python and YAML files
  • extra_patterns (list) – List of shell wildcard patterns for copying
  • clean_if_present (bool) – Overwrite existing case
Raises:

IOError – If casedir exists and clean_if_present is False

decompose_case(dep_job_id=None, force=False)[source]

Decompose case if necessary

Parameters:
  • dep_job_id (int) – Job ID to wait for
  • force (bool) – Force rerun of decomposition tasks
get_input_dict(dictname)[source]

Return a CPL instance of the input file

For standard input files, prefer to use the accessors directly instead of this method. For example, case.controlDict, case.turbulenceProperties, etc.

Parameters:dictname (str) – File name relative to case directory
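
A short sketch of accessing input files; it assumes the returned CPL object supports mapping-style access, and the relative path and key names are illustrative.

    # 'case' is a CMLSimulation instance as in the earlier sketch.
    # Standard input files are available through the named accessors:
    ctrl = case.controlDict
    print(ctrl["endTime"])   # assumes dictionary-style access on the CPL object

    # Other input files can be fetched relative to the case directory:
    fields = case.get_input_dict("system/setFieldsDict")
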
classmethod load(env=None, casedir=None, parent=None, json_file=None)[source]

Loads a previously setup case from persistence file

post_case(post_tasks=None, force=False)[source]

Execute post-processing tasks for this case

prep_case(prep_tasks=None, force=False)[source]

Execute pre-processing tasks for this case

If no tasks are provided, the prep section of the run_configuration passed during the setup phase is used.

Parameters:
  • prep_tasks (list) – List of tasks for Tasks
  • force (bool) – Force prep again if already run
reconstruct_case()[source]

Reconstruct a parallel case

run_tasks(task_file=None)[source]

Run tasks within case directory using the tasks file

save_state(**kwargs)[source]

Dump persistence file in JSON format

solve(force=False)[source]

Execute solve for this case

Parameters:force (bool) – Force resubmit even if previously submitted
status()[source]

Determine status of the run

Returns:Status of the run as a string
Return type:str
update(input_mods=None)[source]

Update the input files within a case directory

Parameters:input_mods (CaelusDict) – Dictionary with changes
LESProperties

Return LESProperties instance for this case

RASProperties

Return RASProperties instance for this case

basedir = None

Root directory containing the case

blockMeshDict

Return blockMeshDict instance for this case

casedir = None

Absolute path to the case directory

changeDictionaryDict

Return changeDictionaryDict instance for this case

cmlControls

Return cmlControls instance for this case

controlDict

Return controlDict instance for this case

decomposeParDict

Return decomposeParDict instance for this case

env = None

CML environment used to run this case

fvSchemes

Return fvSchemes instance for this case

fvSolution

Return fvSolution instance for this case

job_ids = None

Job IDs for SLURM/PBS jobs (internal use only)

logfile

The log file for the solver

name = None

Unique name for this case

parent = None

Instance of CMLSimCollection if part of a larger set

run_config = None

Dictionary containing run configuration (internal use only)

run_flags = None

Dictionary tracking status (internal use only)

solver

Return the solver used for this case

task_file = 'caelus_tasks.yaml'

Name of the task file for this case

transportProperties

Return transportProperties instance for this case

turbulenceProperties

Return turbulenceProperties instance for this case

udf = None

User-defined customization class

CML Parametric Run Manager

class caelus.run.parametric.CMLParametricRun(name, sim_dict, env=None, basedir=None)[source]

Bases: caelus.run.case.CMLSimCollection

A class to handle parametric runs

Parameters:
  • name (str) – Unique name for this parametric run
  • sim_dict (CaelusDict) – Dictionary with simulation settings
  • env (CMLEnv) – CML execution environment
  • basedir (path) – Path where the parametric run directories are created
setup()[source]

Setup the parametric case directories

setup_case(cname, tmpl_dir, cparams, runconf, clone_opts)[source]

Helper function to setup the cases

sim_dict = None

Dictionary containing the run settings

udf_params

Return the parameters for UDF script

udf_script

Return the UDF script

caelus.run.parametric.iter_case_params(sim_options, case_name_func)[source]

Normalize the keys and yield all possible run setups

caelus.run.parametric.normalize_variable_param(varspec)[source]

Helper function to normalize the different run matrix options

Caelus Job Manager Interface

class caelus.run.cmd.CaelusCmd(cml_exe, casedir=None, cml_env=None, output_file=None)[source]

Bases: object

CML execution interface.

CaelusCmd is a high-level interface to execute CML binaries within an appropriate environment across different operating systems.

Parameters:
  • cml_exe (str) – The binary to be executed (e.g., blockMesh)
  • casedir (path) – Absolute path to case directory
  • cml_env (CMLEnv) – Environment used to run the executable
  • output_file (file) – Filename to redirect all output
prepare_exe_cmd()[source]

Prepare the shell command and return as a string

Returns:The CML command invocation with all its options
prepare_shell_cmd()[source]

Prepare the complete command line string as executed
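
A minimal sketch of building (but not launching) a CML command line with CaelusCmd; the solver name, case path, and arguments are illustrative, and num_mpi_ranks is assumed to be settable.

    from caelus.run.cmd import CaelusCmd

    cmd = CaelusCmd("simpleSolver", casedir="runs/airfoil_aoa4")
    cmd.cml_exe_args = "-parallel"
    cmd.parallel = True
    cmd.num_mpi_ranks = 4           # assumed settable for a parallel run

    # Inspect what would be executed without running anything
    print(cmd.prepare_exe_cmd())    # executable plus its options
    print(cmd.prepare_shell_cmd())  # complete command line as executed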

casedir = None

Case directory

cfg = None

CPL configuration object

cml_env = None

CML version used for this run

cml_exe = None

CML program to be executed

cml_exe_args = None

Arguments passed to the CML executable

mpi_extra_args

Extra arguments passed to MPI

num_mpi_ranks

Number of MPI ranks for a parallel run

output_file = None

Log file where all output and error are captured

parallel = None

Is this a parallel run

runner = None

Handle to the subprocess instance running the command

CML Execution Utilities

caelus.run.core.clean_casedir(casedir, preserve_extra=None, preserve_zero=True, preserve_times=False, preserve_processors=False, purge_mesh=False)[source]

Clean a Caelus case directory.

Cleans files generated by a run. By default, this function always preserves the system, constant, and 0 directories as well as any YAML or python files. Additional files and directories can be kept with the preserve_extra option, which accepts a list of shell wildcard patterns matching files/directories that must be preserved.

Parameters:
  • casedir (path) – Absolute path to a case directory.
  • preserve_extra (list) – List of shell wildcard patterns to preserve
  • purge_mesh (bool) – If true, also removes mesh from constant/polyMesh
  • preserve_zero (bool) – If False, removes the 0 directory
  • preserve_times (bool) – If False, removes the time directories
  • preserve_processors (bool) – If False, removes processor directories
Raises:

IOError – clean_casedir will refuse to remove files from a directory that is not a valid Caelus case directory.

caelus.run.core.clean_polymesh(casedir, region=None, preserve_patterns=None)[source]

Clean the polyMesh from the given case directory.

Parameters:
  • casedir (path) – Path to the case directory
  • region (str) – Mesh region to delete
  • preserve_patterns (list) – Shell wildcard patterns of files to preserve
caelus.run.core.clone_case(casedir, template_dir, copy_polymesh=True, copy_zero=True, copy_scripts=True, extra_patterns=None)[source]

Clone a Caelus case directory.

Parameters:
  • casedir (path) – Absolute path to new case directory.
  • template_dir (path) – Case directory to be cloned
  • copy_polymesh (bool) – Copy contents of constant/polyMesh to new case
  • copy_zero (bool) – Copy time=0 directory to new case
  • copy_scripts (bool) – Copy python and YAML files
  • extra_patterns (list) – List of shell wildcard patterns for copying
Returns:

Absolute path to the newly cloned directory

Return type:

path

Raises:

IOError – If either the casedir exists or if the template_dir does not exist or is not a valid Caelus case directory.
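
A short sketch combining clone_case() and clean_casedir(); the paths and preserve patterns are illustrative.

    from caelus.run.core import clean_casedir, clone_case

    # Create a new case from an existing template case
    newdir = clone_case("runs/aoa_8", "templates/airfoil",
                        copy_polymesh=True, copy_zero=True)

    # Later, remove generated files while keeping the mesh, the 0 directory,
    # and any *.dat files
    clean_casedir(newdir, preserve_extra=["*.dat"],
                  preserve_zero=True, purge_mesh=False)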

caelus.run.core.find_caelus_recipe_dirs(basedir, action_file='caelus_tasks.yaml')[source]

Return case directories that contain action files.

A directory is considered a case directory with an action file if it passes the checks in is_caelus_dir() and also contains the action file specified by the user.

Parameters:
  • basedir (path) – Top-level directory to traverse
  • action_file (filename) – Default is caelus_tasks.yaml
Yields:

Path to the case directory with action files
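
A minimal sketch of locating recipe directories under a top-level directory; the "runs" directory name is illustrative.

    from caelus.run.core import find_caelus_recipe_dirs

    for casedir in find_caelus_recipe_dirs("runs", action_file="caelus_tasks.yaml"):
        print("found tasks file in:", casedir)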

caelus.run.core.find_case_dirs(basedir)[source]

Recursively search for case directories existing in a path.

Parameters:basedir (path) – Top-level directory to traverse
Yields:Absolute path to the case directory
caelus.run.core.find_recipe_dirs(basedir, action_file='caelus_tasks.yaml')[source]

Return directories that contain the action files

This behaves differently from find_caelus_recipe_dirs() in that it does not require a valid case directory. It assumes that the case directories are subdirectories and that the task file acts on multiple directories.

Parameters:
  • basedir (path) – Top-level directory to traverse
  • action_file (filename) – Default is caelus_tasks.yaml
Yields:

Path to the case directory with action files

caelus.run.core.get_mpi_size(casedir)[source]

Determine the number of MPI ranks to run

caelus.run.core.is_caelus_casedir(root=None)[source]

Check if the path provided looks like a case directory.

A directory is determined to be an OpenFOAM/Caelus case directory if the system and constant directories and the system/controlDict file exist. No check is performed to determine whether the case will actually run or whether a mesh is present.

Parameters:root (path) – Top directory to start traversing (default: CWD)

Job Scheduler Interface

This module provides a unified interface for submitting serial jobs, local MPI-parallel jobs, and jobs on high-performance computing (HPC) queues.

class caelus.run.hpc_queue.HPCQueue(name, cml_env=None, **kwargs)[source]

Bases: object

Abstract base class for job submission interface

name

Job name

Type:str
queue

Queue/partition where job is submitted

Type:str
account

Account the job is charged to

Type:str
num_nodes

Number of nodes requested

Type:int
num_ranks

Number of MPI ranks

Type:int
stdout

Filename where standard out is redirected

Type:path
stderr

Filename where standard error is redirected

Type:path
join_outputs

Merge stdout/stderr to same file

Type:bool
mail_opts

Mail options (see specific queue implementation)

Type:str
email_address

Email address for notifications

Type:str
qos

Quality of service

Type:str
time_limit

Wall clock time limit

Type:str
shell

Shell to use for scripts

Type:str
mpi_extra_args

Additional arguments passed to MPI

Type:str
Parameters:
  • name (str) – Name of the job
  • cml_env (CMLEnv) – Environment used for execution
static delete(job_id)[source]

Delete a job from the queue

get_queue_settings()[source]

Return a string with all the necessary queue options

static is_job_scheduler()[source]

Is this a job scheduler

static is_parallel()[source]

Flag indicating whether the queue type can support parallel runs

prepare_mpi_cmd()[source]

Prepare the MPI invocation

process_cml_run_env()[source]

Populate the run variables for script

process_foam_run_env()[source]

Populate the run variables for OpenFOAM execution

process_run_env()[source]

Process runtime environment for scripts

classmethod submit(script_file, job_dependencies=None, extra_args=None, dep_type=None)[source]

Submit the job to the queue

update(settings)[source]

Update queue settings from the given dictionary

write_script(script_name=None)[source]

Write a submission script using the arguments provided

Parameters:script_name (path) – Name of the script file
queue_name = '_ERROR_'

Identifier used for queue

script_body

The contents of the script submitted to scheduler
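
A hedged sketch of the generic queue workflow, shown here with SlurmQueue (documented below). It assumes the keys passed to update() correspond to the attribute names listed above; the job name, script name, and settings values are illustrative.

    from caelus.run.hpc_queue import SlurmQueue

    job = SlurmQueue("airfoil_solve")

    # Assumption: update() accepts a dictionary keyed by the attribute names above
    job.update({
        "queue": "standard",
        "num_nodes": 2,
        "num_ranks": 72,
        "time_limit": "04:00:00",
    })

    job.write_script("airfoil_solve.sh")            # write the submission script
    job_id = SlurmQueue.submit("airfoil_solve.sh")  # job ID returned as a string
    print(job_id)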

class caelus.run.hpc_queue.PBSQueue(name, cml_env=None, **kwargs)[source]

Bases: caelus.run.hpc_queue.HPCQueue

PBS Queue Interface

Parameters:
  • name (str) – Name of the job
  • cml_env (CMLEnv) – Environment used for execution
static delete(job_id)[source]

Delete the PBS batch job using job ID

get_queue_settings()[source]

Return all PBS options suitable for embedding in script

classmethod submit(script_file, job_dependencies=None, extra_args=None, dep_type='afterok')[source]

Submit a PBS job using qsub command

job_dependencies is a list of PBS job IDs. The submitted job will run depending on the status of the dependencies.

extra_args is a dictionary of arguments passed to the qsub command.

The job ID returned by this method can be used as an argument to the delete method or as an entry in job_dependencies for a subsequent job submission.

Parameters:
  • script_file (path) – Script provided to the qsub command
  • job_dependencies (list) – List of jobs to wait for
  • extra_args (dict) – Extra PBS arguments
Returns:

Job ID as a string

Return type:

str

class caelus.run.hpc_queue.ParallelJob(name, cml_env=None, **kwargs)[source]

Bases: caelus.run.hpc_queue.SerialJob

Interface to a parallel job

Parameters:
  • name (str) – Name of the job
  • cml_env (CMLEnv) – Environment used for execution
static is_parallel()[source]

Flag indicating whether the queue type can support parallel runs

prepare_mpi_cmd()[source]

Prepare the MPI invocation

class caelus.run.hpc_queue.SerialJob(name, cml_env=None, **kwargs)[source]

Bases: caelus.run.hpc_queue.HPCQueue

Interface to a serial job

Parameters:
  • name (str) – Name of the job
  • cml_env (CMLEnv) – Environment used for execution
static delete(job_id)[source]

Delete a job from the queue

get_queue_settings()[source]

Return queue settings

static is_job_scheduler()[source]

Flag indicating whether this is a job scheduler

static is_parallel()[source]

Flag indicating whether the queue type can support parallel runs

prepare_mpi_cmd()[source]

Prepare the MPI invocation

classmethod submit(script_file, job_dependencies=None, extra_args=None)[source]

Submit the job to the queue

class caelus.run.hpc_queue.SlurmQueue(name, cml_env=None, **kwargs)[source]

Bases: caelus.run.hpc_queue.HPCQueue

Interface to SLURM queue manager

Parameters:
  • name (str) – Name of the job
  • cml_env (CMLEnv) – Environment used for execution
static delete(job_id)[source]

Delete the SLURM batch job using job ID

get_queue_settings()[source]

Return all SBATCH options suitable for embedding in script

prepare_srun_cmd()[source]

Prepare the call to SLURM srun command

classmethod submit(script_file, job_dependencies=None, extra_args=None, dep_type='afterok')[source]

Submit to SLURM using sbatch command

job_dependencies is a list of SLURM job IDs. The submitted job will not run until after all the jobs provided in this list have been completed successfully.

extra_args is a dictionary of extra arguments to be passed to the sbatch command. Note that this can override options provided in the script file as well as introduce additional options during submission.

dep_type can be one of: after, afterok, afternotok, afterany

The job ID returned by this method can be used as an argument to the delete method or as an entry in job_dependencies for a subsequent job submission.

Parameters:
  • script_file (path) – Script provided to sbatch command
  • job_dependencies (list) – List of jobs to wait for
  • extra_args (dict) – Extra SLURM arguments
  • dep_type (str) – Dependency type
Returns:

Job ID as a string

Return type:

str
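
A short sketch of chaining dependent SLURM jobs using the job IDs returned by submit(); the script names are illustrative.

    from caelus.run.hpc_queue import SlurmQueue

    # Submit a meshing job, then a solver job that starts only after the
    # meshing job completes successfully
    mesh_id = SlurmQueue.submit("mesh.sh")
    solve_id = SlurmQueue.submit("solve.sh",
                                 job_dependencies=[mesh_id],
                                 dep_type="afterok")

    # A queued job can be removed using its job ID
    SlurmQueue.delete(solve_id)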

caelus.run.hpc_queue.caelus_execute(cmd, env=None, stdout=sys.stdout, stderr=sys.stderr)[source]

Execute a CML command with the right environment setup

A wrapper around subprocess.Popen to set up the correct environment before invoking the CML executable.

The command can either be a string or a list of arguments as appropriate for Caelus executables.

Examples

caelus_execute("blockMesh -help")

Parameters:
  • cmd (str or list) – The command to be executed
  • env (CMLEnv) – An instance representing the CML installation (default: latest)
  • stdout – A file handle where standard output is redirected
  • stderr – A file handle where standard error is redirected
Returns:

The subprocess instance for the executed command

Return type:

subprocess.Popen
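
A minimal sketch extending the example above: wait on the returned subprocess.Popen instance and check its exit code. The case path is illustrative.

    from caelus.run.hpc_queue import caelus_execute

    proc = caelus_execute("blockMesh -case runs/airfoil_aoa4")
    retcode = proc.wait()
    if retcode != 0:
        print("blockMesh failed with exit code", retcode)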

caelus.run.hpc_queue.get_job_scheduler(queue_type=None)[source]

Return an instance of the job scheduler

caelus.run.hpc_queue.python_execute(pyscript, script_args='', env=None, log_file=None, log_to_file=True)[source]

Execute a python script with the right environment

This function sets up the correct CPL and CML environment and executes the python script within it. The user should provide only the name of the script, not the python executable; it is this function's job to detect the correct python executable and run the script within that environment.

If log_file is not provided, a "py_*.log" file is created automatically to capture output messages from the script, where * is replaced with the basename of the python script.

Parameters:
  • pyscript (path) – Filename of the python script
  • script_args (str) – Extra arguments to be passed to the python script
  • env (CMLEnv) – CML environment used for execution
  • log_file (filename) – Filename to redirect output to
  • log_to_file (bool) – Should outputs be redirected to log file
Returns:

The status of the execution

Return type:

int
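
A minimal sketch of running a post-processing script; the script name and arguments are illustrative. With no log_file given, output goes to an automatically named "py_plot_forces.log" file as described above.

    from caelus.run.hpc_queue import python_execute

    status = python_execute("plot_forces.py",
                            script_args="--case runs/airfoil_aoa4")
    if status != 0:
        print("plot_forces.py exited with status", status)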