As previously mentioned, running batches kicks ass! In that post I quickly described how to use bat files to keep your workstation churning at all times. This time I'll show you the steps needed to convert this into Python. Why, you probably ask? Well, with Python we can easily extend the functionality to do cool stuff like logging the solution time for each job in the batch, creating real-time convergence plots and even scheduling jobs. I'll keep this post as basic as possible, so don't fear if you aren't familiar with Python – you only need the ability to read and some basic logical thinking. If not, run.

Python, Hello World

Python is easy to learn. If you intend to use the Abaqus Scripting Interface (ASI), do yourself a favour and start with basic Python – there are heaps of resources online [1]. Don't worry too much about which site you use, but note that Abaqus Python is version 2.7.2 in 6.14 and 2016, so you should preferably learn that (and not Python 3.x) if you want to expand the scripts shown here.

Having a proper text editor when writing scripts is key – syntax highlighting is a necessity. If this is new to you, I recommend installing Notepad++ [2] and using it for the scripts that follow. Below is an example of the simplest possible Python script there is, along with instructions on how to run it with abaqus python. Note that we have to launch the 'abaqus python' command from the same folder where we stored the script file.

Create the script:
print "Hello World!"
Save this file as "helloWorld.py" – the .py extension is what triggers the syntax highlighting.

Run it from the command line:
abaqus python helloWorld.py

This should output “Hello World!” in the command line.

[Figure 1: the helloWorld.py script]

[Figure 2: running helloWorld.py]

Our bat files in Python

In the previous article, we created a bat file that looped over all our input files and invoked the solver for each file:

FOR %%i IN (*.inp) DO call abaqus job=%%i cpus=2 int
ECHO job(s) done

To do exactly the same in Python, we can import two libraries – glob and os – that will help us loop through the files and spawn system commands.

[Figure 4: the runDynamicBatch.py script]
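The script itself was shown as a screenshot in the original post. Here is a minimal sketch of what runDynamicBatch.py could look like, based on the description below; the exact command string (and the cpus=2 setting, which simply mirrors the bat file) is an assumption:

# runDynamicBatch.py -- minimal sketch: submit every .inp file in the current folder
import glob
import os

# Build a list of all input files in the folder and launch the solver
# once for each of them, just like the bat file did.
for inputFile in glob.glob("*.inp"):
    jobName = inputFile.replace(".inp", "")        # strip the extension for the job name
    command = "abaqus job=" + jobName + " cpus=2 interactive"
    os.system(command)

print "job(s) done"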

Execution starts with glob.glob("*.inp"), which creates a list of all the input files in the folder; the for loop then runs its body once for each inputFile. Inside the loop we first build the solver command – the only thing that changes from job to job is the input file name – and then send it back to the command line with os.system, which outputs a summary log exactly like the bat file did.

Run the script by cd'ing into the folder and issuing the command:
abaqus python runDynamicBatch.py

[Figure 5: running runDynamicBatch.py]

Logging Wall-Clock times

As an engineer, I love to gather statistics. When I come back to my desk after the weekend, I'd like to know how fast my jobs ran and compare this against other variations to improve performance and perhaps plan the next nightly run. This information is available in the .sta files, but with Python I can easily create a script that collects the time it took to run each job and stores it in a single file. I'll show you how:

[Figure: log file with wall-clock times for each job]

By simply adding a function that appends the wall-clock time to a single file, we can get this output. Take a look at the code below:
[Figure: the logging script]
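The code was also shown as an image in the original post. As a stand-in, here is a minimal sketch of one way to achieve the same thing: it times each solver call with Python's time module and appends the result to a single log file. The original script may instead have read the time from the .sta file; the file name batchTimes.log and the function name are made-up examples.

# runLoggedBatch.py -- minimal sketch: run every job and log its wall-clock time
import glob
import os
import time

def logWallClockTime(jobName, seconds):
    # Append one line per finished job to a shared log file
    with open("batchTimes.log", "a") as logFile:
        logFile.write("%s: %.1f seconds\n" % (jobName, seconds))

for inputFile in glob.glob("*.inp"):
    jobName = inputFile.replace(".inp", "")
    startTime = time.time()
    os.system("abaqus job=" + jobName + " cpus=2 interactive")
    logWallClockTime(jobName, time.time() - startTime)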

Onwards

The purpose of the last example isn't to show you how to create scripts, but merely how powerful they can be. There is quite a lot of variation in how jobs are managed within this discipline. With only a few lines of Python you can create something genuinely useful, and this is especially true for small and mid-sized businesses where the simulation teams work closely together.

With only a few lines you can implement a simple queueing system that regularly checks a folder for un-run jobs, and then add whatever functionality is needed to make it run constantly – a pause key or similar. The possibilities are endless, and you might be amazed by how little effort is required.
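To give a taste of how little code that takes, here is a rough sketch of such a polling loop. It assumes a job counts as already run once an .odb with the same name exists, and it checks the folder every five minutes; the file names and the check interval are made up for illustration, and a real version would add the pause/stop handling mentioned above.

# simpleQueue.py -- rough sketch of a folder-polling queue
import glob
import os
import time

while True:
    for inputFile in glob.glob("*.inp"):
        jobName = inputFile.replace(".inp", "")
        # Treat the job as un-run as long as no odb with the same name exists
        if not os.path.exists(jobName + ".odb"):
            os.system("abaqus job=" + jobName + " cpus=2 interactive")
    # Check the folder again in five minutes (stop with Ctrl+C)
    time.sleep(300)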

References

[1] http://labs.codecademy.com/, http://learnpythonthehardway.org/, https://www.reddit.com/r/learnpython
[2] Notepad++, https://notepad-plus-plus.org/