Research Computing on the Great Lakes cluster

This workshop will provide a brief overview of the new HPC environment and is intended for current Flux and Armis users. We will use the temporary Beta HPC cluster to demonstrate how jobs will be submitted and managed under the new Great Lakes, Armis2, and Lighthouse clusters available later this year.

There are many differences between the familiar Flux environment and that of the new HPC clusters, including a new batch scheduling system, a new interactive batch job environment, a new HPC web portal, a new module environment, and a new on-demand-only job accounting system.

We will cover these differences in the workshop and provide hands-on training in creating and running job submission scripts in the new HPC environment. Students are expected to be conversant with the Linux command line and have experience in creating, submitting, and troubleshooting PBS batch scripts.
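
For those coming from Torque/PBS, the sketch below shows how a few common batch directives map to their Slurm equivalents. It is illustrative only; the account name is a placeholder, not a real allocation, and the migration documentation is the authoritative reference.

    #!/bin/bash
    # Sketch: common Torque/PBS directives and their Slurm counterparts.
    # The account name below is a placeholder, not a real allocation.

    ##PBS -N myjob                  # old Torque way to name a job
    #SBATCH --job-name=myjob        # Slurm equivalent

    ##PBS -l nodes=1:ppn=4          # old Torque resource request
    #SBATCH --nodes=1               # Slurm splits this into nodes...
    #SBATCH --ntasks-per-node=4     # ...and tasks per node

    ##PBS -l walltime=01:00:00      # old Torque wall-clock limit
    #SBATCH --time=01:00:00         # Slurm equivalent

    ##PBS -A myaccount              # old Torque account
    #SBATCH --account=myaccount     # Slurm equivalent (placeholder name)

    echo "Running on $SLURM_JOB_NODELIST"   # node list Slurm assigns at run time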

Introduction to the Great Lakes cluster and batch computing with Slurm

OVERVIEW

This workshop will provide a brief overview of the components of the Great Lakes cluster. The main body of the workshop will cover the resource manager and scheduler, creating submission scripts to run jobs and the options available in them, and hands-on experience. By the end of the workshop, every participant should have created a submission script, submitted a job, tracked its progress, and collected its output. Participants will have several working examples from which to build their own submission scripts in their own home directories.
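
As a preview, here is a minimal sketch of the kind of submission script participants will build. The account and partition names are placeholders; use the values given to you in class.

    #!/bin/bash
    #SBATCH --job-name=hello          # name shown in the queue
    #SBATCH --account=workshop        # placeholder; use the account given in class
    #SBATCH --partition=standard      # placeholder partition name
    #SBATCH --nodes=1                 # one node
    #SBATCH --ntasks=1                # one task
    #SBATCH --time=00:05:00           # five-minute wall-clock limit
    #SBATCH --output=hello-%j.log     # %j expands to the job ID

    echo "Hello from $(hostname)"

Saved as hello.sbat, this would be submitted with sbatch hello.sbat, tracked with squeue -u $USER, and its output collected from hello-<jobid>.log when it finishes.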

PRE-REQUISITES

This course assumes familiarity with the Linux command line as might be gained from the CSCAR/ARC-TS workshop Introduction to the Linux Command Line. In particular, participants should understand how files and folders work, be able to create text files using the nano editor, be able to create and remove files and folders, and understand what input and output redirection are and how to use them.
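
As a quick self-check, the shell fragment below exercises the redirection operators the course assumes you already know:

    echo "4 8 15 16" > numbers.txt      # '>' sends stdout to a file, overwriting it
    echo "23 42" >> numbers.txt         # '>>' appends instead of overwriting
    sort -n < numbers.txt > sorted.txt  # '<' feeds the file to sort's standard input
    cat sorted.txt                      # print the sorted result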

INSTRUCTORS

Dr. Charles J Antonelli
Research Computing Services
LSA Technology Services

Charles is a High Performance Computing Consultant in the Research Computing Services group of LSA TS at the University of Michigan, where he is responsible for high performance computing support and education, and has served as an Advocate to the Departments of History and Communications. Prior to this, he built a parallel data-ingestion component of a novel earth science data assimilation system and a secure packet vault, and worked on the No. 5 ESS Switch at Bell Labs in the 1980s. He has taught courses in operating systems, distributed file systems, C++ programming, security, and database application design.

John Thiels
Research Computing Services
LSA Technology Services

Mark Champe
Research Computing Services
LSA Technology Services

COURSE PREPARATION

In order to participate successfully in the workshop exercises, you must have a user login, a Slurm account, and be enrolled in Duo. The user login allows you to log in to the cluster, create, compile, and test applications, and prepare jobs for submission. The Slurm account allows you to submit those jobs, executing the applications in parallel on the cluster and charging their resource use to the account. Duo is required to help authenticate you to the cluster.


USER LOGIN

If you already have a Flux user login, you don't need to do anything. Otherwise, go to the Flux user login application page at https://arc-ts.umich.edu/fluxform/.

Please note that obtaining a user account requires human processing, so be sure to do this at least two business days before class begins.


SLURM ACCOUNT

We create a Slurm account for the workshop so you can run jobs on the cluster during the workshop and for one day afterward, for those who would like additional practice. The workshop job account is quite limited and is intended only for running examples that help you cement the details of job submission and management. If you already have an existing Slurm account, you can use that, though if there are any issues with it, we will ask you to use the workshop account.


DUO AUTHENTICATION

Duo two-factor authentication is required to log in to the cluster. When logging in, you will need to type your UMICH (AKA Level 1) password as well as authenticate through Duo in order to access Great Lakes.

If you need to enroll in Duo, follow the instructions at Enroll a Smartphone or Tablet in Duo.

Please enroll in Duo before you come to class.

LAPTOP PREPARATION

You do not need to bring your own laptop to class. The classroom contains Windows and Mac computers that require your uniqname and UMICH (AKA Level 1) password to log in and have all necessary software pre-loaded.

If you want to use a laptop for the course, you are welcome to do so; please see our web page on Preparing your laptop to use Flux. However, if there are problems connecting your laptop, you will be asked to switch to a provided computer for the class. We cannot stop to debug connection issues with personal or departmental laptops during class.

If you are unable to attend the presentation in person, we will offer a link to the live course via BlueJeans. Please register as if attending in person; this will put you on the wait list, but we will set up your account for remote attendance.

Sliding into Slurm: An early look at U-M’s new high-performance computing environment

This workshop will provide a brief overview of the new HPC environment and is intended for current Flux and Armis users. We will use the temporary Beta HPC cluster to demonstrate how jobs will be submitted and managed under the new Great Lakes, Armis2, and Lighthouse clusters available later this year.

There are many differences between the familiar Flux environment and that of the new HPC clusters, including a new batch scheduling system, a new interactive batch job environment, a new HPC web portal, a new module environment, and a new on-demand-only job accounting system.

We will cover these differences in the workshop and provide hands-on training in creating and running job submission scripts in the new HPC environment. Students are expected to be conversant with the Linux command line and have experience in creating, submitting, and troubleshooting PBS batch scripts.
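
One visible difference is the interactive batch environment: under Slurm, an interactive session is requested with srun rather than qsub -I. A minimal sketch, with placeholder account and partition names:

    # Request an interactive shell on a compute node (placeholder names):
    srun --account=myaccount --partition=standard \
         --nodes=1 --ntasks=1 --time=00:30:00 --pty /bin/bash

    # Work interactively on the compute node, then release the allocation:
    hostname                 # confirm you are on a compute node
    exit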

Beta cluster available for learning Slurm; new scheduler to be part of upcoming cluster updates

New HPC resources to replace Flux, along with updates to Armis, are coming. They will run a new scheduling system, Slurm. You will need to learn its commands and update your batch files to run jobs successfully. Read on to learn the details, how to get training, and how to adapt your files.

In anticipation of these changes, ARC-TS has created the test cluster “Beta,” which will provide a testing environment for the transition to Slurm. Slurm will be used on Great Lakes; on the Armis HIPAA-aligned cluster; and on a new cluster called “Lighthouse,” which will succeed the Flux Operating Environment in early 2019.

Currently, Flux and Armis use the Torque (PBS) resource manager and the Moab scheduling system; when completed, Great Lakes and Lighthouse will use the Slurm scheduler and resource manager, which will enhance the performance and reliability of the new resources. Armis will transition from Torque to Slurm in early 2019.

The Beta test cluster is available to all Flux users, who can log in via ssh at beta.arc-ts.umich.edu. Beta has its own /home directory, so users will need to create or transfer any files they need via scp/sftp or Globus.
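
A minimal sketch of logging in and staging a file, with "uniqname" standing in for your own login:

    # Log in to the Beta test cluster with your uniqname
    # (a Duo two-factor prompt may follow your password):
    ssh uniqname@beta.arc-ts.umich.edu

    # From your own machine, copy a job script into your Beta home directory:
    scp myjob.sbat uniqname@beta.arc-ts.umich.edu:~/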

Slurm commands will be needed to submit jobs. For a comparison of Slurm and Torque commands, see our Torque to Slurm migration page. For more information, see the Beta home page.
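
A few of the most common translations, shown here as an illustrative sketch rather than the full mapping:

    # Illustrative Torque-to-Slurm command translations (not exhaustive):
    #
    #   Torque (Flux)        Slurm (Beta)                Purpose
    #   qsub job.pbs         sbatch job.sbat             submit a batch script
    #   qstat -u $USER       squeue -u $USER             list your jobs
    #   qdel <jobid>         scancel <jobid>             cancel a job
    #   qstat -f <jobid>     scontrol show job <jobid>   inspect a job in detail
    #   qsub -I              srun --pty /bin/bash        start an interactive shell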

Support staff from ARC-TS and individual academic units will conduct several in-person and online training sessions to help users become familiar with Slurm. We have been testing Slurm for several months, and believe the performance gains, user communications, and increased reliability will significantly improve the efficiency and effectiveness of the HPC environment at U-M.

The tentative time frame for replacing or transitioning current ARC-TS resources is:

  • Flux to Great Lakes, first half of 2019
  • Armis from Torque to Slurm, January 2019
  • Flux Operating Environment to Lighthouse, first half of 2019
  • Open OnDemand on Beta, which replaces ARC Connect for web-based job submissions, Jupyter Notebooks, Matlab, and additional software packages, fall 2018