Beta cluster available for learning Slurm; new scheduler to be part of upcoming cluster updates


New HPC resources that will replace Flux, along with updates to Armis, are coming. They will run a new scheduling system, Slurm. You will need to learn its commands and update your batch files to run jobs successfully. Read on to learn the details, how to get training, and how to adapt your files.

In anticipation of these changes, ARC-TS has created the test cluster “Beta,” which will provide a testing environment for the transition to Slurm. Slurm will be used on Great Lakes; the Armis HIPAA-aligned cluster; and a new cluster called “Lighthouse,” which will succeed the Flux Operating Environment in early 2019.

Currently, Flux and Armis use the Torque (PBS) resource manager and the Moab scheduling system; when completed, Great Lakes and Lighthouse will use the Slurm scheduler and resource manager, which will enhance the performance and reliability of the new resources. Armis will transition from Torque to Slurm in early 2019.

The Beta test cluster is available to all Flux users, who can log in via ssh at ‘beta.arc-ts.umich.edu’. Beta has its own /home directory, so users will need to create or transfer any files they need via scp/sftp or Globus.

Slurm commands will be needed to submit jobs. For a comparison of Slurm and Torque commands, see our Torque to Slurm migration page. For more information, see the Beta home page.
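
To give a sense of what the batch-file changes look like, here is a minimal sketch of a Slurm job script written and submitted from Python. The job name, account, and resource values are illustrative placeholders rather than Beta-specific settings, and the comments note the rough Torque/PBS equivalents; consult the migration page linked above for the options that apply to your own jobs.

```python
# Minimal sketch of a Slurm job script, written and submitted from Python.
# The job name, account, and resource values below are placeholders.
import subprocess

batch_script = """#!/bin/bash
#SBATCH --job-name=example_job      # Torque/PBS: #PBS -N example_job
#SBATCH --account=example_account   # Torque/PBS: #PBS -A example_account
#SBATCH --nodes=1                   # Torque/PBS: #PBS -l nodes=1:ppn=4
#SBATCH --ntasks-per-node=4
#SBATCH --mem=8g                    # Torque/PBS: #PBS -l mem=8gb
#SBATCH --time=01:00:00             # Torque/PBS: #PBS -l walltime=01:00:00

srun hostname    # stand-in for the real work the job would do
"""

with open("example_job.sbat", "w") as f:
    f.write(batch_script)

# 'sbatch' replaces 'qsub' for submission; 'squeue' and 'scancel'
# take over the roles of 'qstat' and 'qdel'.
subprocess.run(["sbatch", "example_job.sbat"], check=True)
```

The script can also be submitted directly from the shell with sbatch; the Torque to Slurm migration page lists the full command and directive mappings.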

Support staff from ARC-TS and individual academic units will conduct several in-person and online training sessions to help users become familiar with Slurm. We have been testing Slurm for several months, and believe the performance gains, user communications, and increased reliability will significantly improve the efficiency and effectiveness of the HPC environment at U-M.

The tentative time frame for replacing or transitioning current ARC-TS resources is:

  • Flux to Great Lakes, first half of 2019
  • Armis from Torque to Slurm, January 2019
  • Flux Operating Environment to Lighthouse, first half of 2019
  • Open OnDemand on Beta, which replaces ARC Connect for web-based job submissions, Jupyter Notebooks, Matlab, and additional software packages, fall 2018

U-M selects Dell EMC, Mellanox and DDN to Supply New “Great Lakes” Computing Cluster


The University of Michigan has selected Dell EMC as lead vendor to supply its new $4.8 million Great Lakes computing cluster, which will serve researchers across campus. Mellanox Technologies will provide networking solutions, and DDN will supply storage hardware.

Great Lakes will be available to the campus community in the first half of 2019, and over time will replace the Flux supercomputer, which serves more than 2,500 active users at U-M for research ranging from aerospace engineering simulations and molecular dynamics modeling to genomics and cell biology to machine learning and artificial intelligence.

Great Lakes will be the first cluster in the world to use the Mellanox HDR 200 gigabit per second InfiniBand networking solution, enabling faster data transfer speeds and increased application performance.

“High-performance research computing is a critical component of the rich computing ecosystem that supports the university’s core mission,” said Ravi Pendse, U-M’s vice president for information technology and chief information officer. “With Great Lakes, researchers in emerging fields like machine learning and precision health will have access to a higher level of computational power. We’re thrilled to be working with Dell EMC, Mellanox, and DDN; the end result will be improved performance, flexibility, and reliability for U-M researchers.”

“Dell EMC is thrilled to collaborate with the University of Michigan and our technology partners to bring this innovative and powerful system to such a strong community of researchers,” said Thierry Pellegrino, vice president, Dell EMC High Performance Computing. “This Great Lakes cluster will offer an exceptional boost in performance, throughput and response to reduce the time needed for U-M researchers to make the next big discovery in a range of disciplines from artificial intelligence to genomics and bioscience.”

The main components of the new cluster are:

  • Dell EMC PowerEdge C6420 compute nodes, PowerEdge R640 high memory nodes, and PowerEdge R740 GPU nodes
  • Mellanox HDR 200Gb/s InfiniBand ConnectX-6 adapters, Quantum switches and LinkX cables, and InfiniBand gateway platforms
  • DDN GRIDScaler® 14KX® and 100 TB of usable IME® (Infinite Memory Engine) memory

“HDR 200G InfiniBand provides the highest data speed and smart In-Network Computing acceleration engines, delivering HPC and AI applications with the best performance, scalability and efficiency,” said Gilad Shainer, vice president of marketing at Mellanox Technologies. “We are excited to collaborate with the University of Michigan, Dell EMC and DataDirect Networks, in building a leading HDR 200G InfiniBand-based supercomputer, serving the growing demands of U-M researchers.”

“DDN has a long history of working with Dell EMC and Mellanox to deliver optimized solutions for our customers. We are happy to be a part of the new Great Lakes cluster, supporting its mission of advanced research and computing. Partnering with forward-looking thought leaders such as these is always enlightening and enriching,” said Dr. James Coomer, SVP Product Marketing and Benchmarks at DDN.

Great Lakes will provide a significant improvement in computing performance over Flux. For example, each compute node will have more cores, higher maximum clock speeds, and more memory. The cluster will also have improved internet connectivity and file system performance, as well as NVIDIA GPUs with Tensor Cores, which are far more powerful for machine learning than prior generations of GPUs.

“Users of Great Lakes will have access to more cores, faster cores, faster memory, faster storage, and a more balanced network,” said Brock Palen, Director of Advanced Research Computing – Technology Services (ARC-TS).

The Flux cluster was created approximately 8 years ago, although many of the individual nodes have been added since then. Great Lakes represents an architectural overhaul that will result in better performance and efficiency. Based on extensive input from faculty and other stakeholders across campus, the new Great Lakes cluster will be designed to deliver similar services and capabilities as Flux, including the ability to accommodate faculty purchases of hardware, access to GPUs and large-memory nodes, and improved support for emerging uses such as machine learning and genomics.

ARC-TS will operate and maintain the cluster once it is built. Allocations of computing resources through ARC-TS include access to hundreds of software titles, as well as support and consulting from professional staff with decades of combined experience in research computing.

Updates on the progress of Great Lakes will be available at https://arc-ts.umich.edu/greatlakes/.

University of Michigan awarded Women in High Performance Computing chapter


The University of Michigan has been recognized as one of the first Chapters in the new Women in High Performance Computing (WHPC) Pilot Program.

“The WHPC Chapter Pilot will enable us to reach an ever-increasing community of women, provide these women with the networks that we recognize are essential for them excelling in their career, and retaining them in the workforce.” says Dr. Sharon Broude Geva, WHPC’s Director of Chapters and Director of Advanced Research Computing (ARC) at the University of Michigan (U-M). “At the same time, we envisage that the new Chapters will be able to tailor their activities to the needs of their local community, as we know that there is no ‘one size fits all’ solution to diversity.”

“At WHPC we are delighted to be accepting the University of Michigan as a Chapter under the pilot program, and working with them to build a sustainable solution to diversifying the international HPC landscape” said Dr. Toni Collis, Chair and co-founder of WHPC, and Chief Business Development Officer at Appentra Solutions.

The process of selecting organizations to participate in the program accounted for potential conflicts of interest; Geva did not vote on U-M’s application.

About Women in High Performance Computing (WHPC) and the Chapters and Affiliates Pilot Program

Women in High Performance Computing (WHPC) was created with the vision to encourage women to participate in the HPC community by providing fellowship, education, and support to women and the organizations that employ them. Through collaboration and networking, WHPC strives to bring together women in HPC and technical computing while encouraging women to engage in outreach activities and improve the visibility of inspirational role models.

WHPC has launched a pilot program for groups to become Affiliates or Chapters. The program will share the knowledge and expertise of WHPC as well as help to tailor activities and develop diversity and inclusion goals suitable to the needs of local HPC communities. During the pilot, WHPC will work with the Chapters and Affiliates to support and promote the work of women in their organizations, develop crucial role models, and assist employers in the recruitment and retention of a diverse and inclusive HPC workforce.

WHPC is stewarded by EPCC at the University of Edinburgh. For more information visit http://www.womeninhpc.org.  

For more information on the U-M chapter, contact Dr. Geva at sgeva@umich.edu.

ARC-TS joins Cloud Native Computing Foundation


Advanced Research Computing – Technology Services (ARC-TS) at the University of Michigan has become the first U.S. academic institution to join the Cloud Native Computing Foundation (CNCF), a foundation that advances the development and use of cloud native applications and services. Founded in 2015, CNCF is part of the Linux Foundation.

CNCF announced ARC-TS’s membership at the KubeCon and CloudNativeCon event in Copenhagen. A video of the opening remarks by CNCF Executive Director Dan Kohn can be viewed on the event website.

“Our membership in the CNCF signals our commitment to bringing cloud computing and containers technology to researchers across campus,” said Brock Palen, Director of ARC-TS. “Kubernetes and other CNCF platforms are becoming crucial tools for advanced machine learning, pipelining, and other research methods. We also look forward to bringing an academic perspective to the foundation.”

ARC-TS’s membership and participation in the group signals its adoption and commitment to cloud-native technologies and practices. Users of containers and other CNCF services will have access to experts in the field.

Membership gives the U-M research community input into the continuing development of cloud-native applications, within both CNCF-managed and ancillary projects. U-M is the second academic institution to join the foundation, and the only one in the U.S.

ARC-TS begins work on new “Great Lakes” cluster to replace Flux


Advanced Research Computing – Technology Services (ARC-TS) is starting the process of creating a new, campus-wide computing cluster, “Great Lakes,” that will serve the broad needs of researchers across the University. Over time, Great Lakes will replace Flux, the shared research computing cluster that currently serves over 300 research projects and 2,500 active users.

“Researchers will see improved performance, flexibility and reliability associated with newly purchased hardware, as well as changes in policies that will result in greater efficiencies and ease of use,” said Brock Palen, director of ARC-TS.

The Great Lakes cluster will be available to all researchers on campus for simulation, modeling, machine learning, data science, genomics, and more. The platform will provide a balanced combination of computing power, I/O performance, storage capability, and accelerators.

ARC-TS is in the process of procuring the cluster. Only minimal interruption to ongoing research is expected. A “Beta” cluster will be available to help researchers learn the new system before Great Lakes is deployed in the first half of 2019.

The Flux cluster is approximately 8 years old, although many of the individual nodes are newer. One of the benefits of replacing the cluster is to create a more homogeneous platform.

Based on extensive input from faculty and other stakeholders across campus, the new Great Lakes cluster will be designed to deliver similar services and capabilities as Flux, including the ability to accommodate faculty purchases of hardware, access to GPUs and large-memory nodes, and improved support for emerging uses such as machine learning and genomics. The cluster will consist of approximately 20,000 cores.

For more information, contact hpc-support@umich.edu, and see arc-ts.umich.edu/systems-services/greatlakes, where updates to the project will be posted.

The 2018 MICDE Symposium: Summary by Bradley Dice, Ph.D. student in Physics and Computational Science


This piece was first published on LinkedIn by Bradley Dice, a U-M Ph.D. student in Physics and Computational Science.

MICDE Symposium 2018: Computation, A Pillar of Science and a Lens to the Future

High-performance computing (HPC) is becoming an increasingly powerful tool in the hands of scientists, driving new discoveries in physical sciences, life sciences, and social sciences. The development of new (frequently domain-specific) approaches to machine learning and faster, smarter processing of sets of Big Data allows us to explore questions that were previously impossible to study. Yesterday, I presented a poster at the Michigan Institute for Computational Discovery & Engineering (MICDE) annual Symposium and attended a number of talks by researchers working at the intersection of high-performance computing and their domain science. The theme for the symposium was “Computation: A Pillar of Science and a Lens to the Future.”

Collaborative Computational Science with signac

My scientific work, and the work of my colleagues in the Glotzer lab, has been made vastly more efficient through the use of tools for collaborative science, particularly the signac framework. I presented a poster about how the signac framework (composed of the open-source Python packages signac, signac-flow, and signac-dashboard) enables scientists to rapidly simulate, model, and analyze data. The name comes from painter Paul Signac, who, along with Georges Seurat, founded the style of pointillism. This neo-impressionist style uses tiny dots of color instead of long brushstrokes, which collectively form a beautiful image when the viewer steps back. This metaphor fits the way that a lot of science works: given only points of data, scientists aim to see the whole picture and tell its story. Since our lab studies materials, our “points” of data fit into a multidimensional parameter space, where quantities like pressure and temperature, or even particles’ shapes, may vary. Using this data, our lab computationally designs novel materials from nanoparticles and studies the physics of complex crystalline structures.

The core signac package, which acts as a database on top of the file system, helps organize and manage scientific data and metadata. Its companion tool signac-flow enables users to quickly define “workflows” that run on supercomputing clusters, determining what operations to perform and submitting the jobs to the cluster for processing. Finally, signac-dashboard (which I develop) provides a web-based data visualization interface that allows users to quickly scan for interesting results and answer scientific questions. These tools include tutorials and documentation, to help users acquaint themselves and get on to doing science as quickly as possible. Importantly, the tools are not specific to materials science. Many scientific fields have similar questions, and the toolkit can easily be applied in fields where exploration or optimization within parameter spaces are common, ranging from fluid mechanics to machine learning.
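
As a flavor of how these pieces fit together, here is a minimal sketch of a signac data space driven by a signac-flow workflow. The project name, state-point parameters, and operation are illustrative placeholders, not anything from the Glotzer lab's actual studies.

```python
# Minimal sketch of a signac + signac-flow workflow.
# State-point parameters and the operation are illustrative placeholders.
import signac
from flow import FlowProject

# Initialize a signac project: a database layered on top of the file system.
project = signac.init_project("example-study")

# Each "job" corresponds to one point in the multidimensional parameter space.
for pressure in (0.5, 1.0, 2.0):
    for temperature in (1.0, 1.5):
        project.open_job({"pressure": pressure, "temperature": temperature}).init()


class Study(FlowProject):
    pass


@Study.operation
@Study.post(lambda job: "volume" in job.doc)  # operation is "done" once a result is stored
def estimate_volume(job):
    # Stand-in for a real simulation or analysis step.
    job.doc.volume = job.sp.temperature / job.sp.pressure


if __name__ == "__main__":
    # Provides the status/run/submit command-line interface.
    Study().main()
```

The status, run, and submit subcommands exposed by FlowProject().main() are what connect this parameter space to a cluster's scheduler: run executes eligible operations locally, while submit generates and submits the corresponding cluster jobs.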

During the symposium, I learned a lot about how others are using scientific computing in their own work. The symposium speakers came from a wide range of fields, including biology, mathematics, and fluid dynamics. Some of my favorite talks are described below.

The Past: Phylogeny and Uncovering Life’s Origins

High-performance computing is enabling scientists to look in all sorts of directions, including into the past. Stephen Smith, Assistant Professor of Ecology and Evolutionary Biology at the University of Michigan, talked about his lab’s research in detecting evolutionary patterns using genomic data. From the wealth of genetic data that scientists have collected, the Smith lab aims to improve our understanding of the “tree of life”: the overarching phylogenetic tree that can explain the progress of speciation over time. Projects like Open Tree of Life and PHLAWD, an open-source C++ project to process data from the National Center for Biotechnology Information’s GenBank data source, are just two of the ways that open science and big data are informing our understanding of life itself.

The Present: From Algebra to Autonomy

Cleve Moler, the original author of the MATLAB language and chief mathematician, chairman, and cofounder of MathWorks, spoke about his career and how the tools MATLAB has provided for numerical linear algebra (and many other computational tasks) have been important for the development of science and engineering over the last 34 years. MATLAB is taught to STEM students in many undergraduate curricula, and is used widely across industry to simulate and model the behavior of real systems. Features like the Automated Driving System Toolbox are poised to play a role in autonomous vehicles and the difficult computational tasks inherent in their operation.

The Future: Parallel-in-Time Predictions and Meteorology

A significant challenge in weather and climate modeling is that supercomputer architectures are highly parallel, while many simulations of fluids are inherently serial: each timestep must be computed before the next timestep can begin. Beth Wingate, Professor of Mathematics at the University of Exeter and published poet, is developing a powerful approach that may change the way that such models work. Called “parallel-in-time,” it separates the effects of slow dynamics and fast dynamics, enabling parallel architectures to take advantage of longer timesteps and separate the work across many processors.
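
To give a rough sense of the idea, here is a generic sketch of the slow/fast splitting that underlies such methods; this is schematic only, and not necessarily the exact formulation presented in the talk.

```latex
% Generic oscillatory system: \mathcal{L} generates fast linear oscillations
% on the O(1/\varepsilon) time scale, \mathcal{N} is the slower nonlinearity.
\frac{\partial u}{\partial t} + \frac{1}{\varepsilon}\,\mathcal{L}\,u + \mathcal{N}(u) = 0

% Changing variables to v = e^{t\mathcal{L}/\varepsilon} u absorbs the fast
% linear oscillations into the exponential, leaving an equation for v alone:
\frac{\partial v}{\partial t} = -\,e^{\,t\mathcal{L}/\varepsilon}\,
    \mathcal{N}\!\left(e^{-t\mathcal{L}/\varepsilon} v\right)
```

Because v varies slowly once the rapid oscillations are averaged, a coarse propagator can take much longer time steps than the fast dynamics would otherwise allow, and a parallel-in-time iteration can then distribute the time interval across many processors.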

Conclusions

Computational science is growing rapidly, improving our ability to address the most pressing questions and the mysteries of our world. As new supercomputing resources come online, such as Oak Ridge National Laboratory’s Summit, the promise of exascale computing is coming ever closer to reality. I look forward to what the next year of HPC will bring to our world.

Interdisciplinary Committee on Organizational Studies (ICOS) Big Data Summer Camp, May 14-18

Social and organizational life is increasingly conducted online through electronic media, from emails to Twitter feeds to dating sites to GPS phone tracking. The traces these activities leave behind have acquired the (misleading) title of “big data.” Within a few years, a standard part of graduate training in the social sciences will include a hefty dose of “using big data,” and we will all be using terms like API and Python.

This year ICOS, MIDAS, and ARC are again offering a one-week “big data summer camp” for doctoral students interested in organizational research, with a combination of detailed examples from researchers; hands-on instruction in Python, SQL, and APIs; and group work to apply these ideas to organizational questions. Enrollment is free, but students must commit to attending all day for each day of camp, and be willing to work in interdisciplinary groups.

The camp runs all day, May 14-18.

U-M launches Data Science Master’s Program


The University of Michigan’s new, interdisciplinary Data Science Master’s Program is taking applications for its first group of students. The program is aimed at teaching participants how to extract useful knowledge from massive datasets using computational and statistical techniques.

The program is a collaboration between the College of Engineering (EECS), the College of Literature, Science, and the Arts (Statistics), the School of Public Health (Biostatistics), the School of Information, and the Michigan Institute for Data Science.

“We are very excited to be offering this unique collaborative program, which brings together expertise from four key disciplines at the University in a curriculum that is at the forefront of data science,” said HV Jagadish, Bernard A. Galler Collegiate Professor of Electrical Engineering and Computer Science, who chairs the program committee for the program.

“MIDAS was a catalyst in bringing  faculty from multiple disciplines together to work towards the development of this new degree program,”  he added.

MIDAS will provide students in this program with interdisciplinary collaborations, intellectual stimulation, exposure to a broad range of practice, networking opportunities, and space on Central Campus to meet for formal and informal gatherings.

For more information, see the program website at https://lsa.umich.edu/stats/masters_students/mastersprograms/data-science-masters-program.html, and the program guide (PDF) at https://lsa.umich.edu/content/dam/stats-assets/StatsPDF/MSDS-Program-Guide.pdf.

Applications are due March 15.

HPC training workshops begin Tuesday, Feb. 13


A series of training workshops in high performance computing will be held Feb. 13 through March 6, 2018, presented by CSCAR in conjunction with Advanced Research Computing – Technology Services (ARC-TS).

Introduction to the Linux Command Line
This course will familiarize the student with the basics of accessing and interacting with Linux computers using the GNU/Linux operating system’s Bash shell, also known as the “command line.”
Location: East Hall, Room B254, 530 Church St.
Dates: (Please sign up for only one)
• Tuesday, Feb. 13, 1 – 4 p.m. (full description | registration)
• Friday, Feb. 16, 9 a.m. – noon (full description | registration)

Introduction to the Flux cluster and batch computing
This workshop will provide a brief overview of the components of the Flux cluster, including the resource manager and scheduler, and will offer students hands-on experience.
Location: East Hall, Room B254, 530 Church St.
Dates: (Please sign up for only one)
• Monday, Feb. 19, 1 – 4 p.m. (full description | registration)
• Tuesday, March 6, 1 – 4 p.m. (full description | registration)

Advanced batch computing on the Flux cluster
This course will cover advanced areas of cluster computing on the Flux cluster, including common parallel programming models, dependent and array scheduling, and a brief introduction to scientific computing with Python, among other topics.
Location: East Hall, Room B250, 530 Church St.
Dates: (Please sign up for only one)
• Wednesday, Feb. 21, 1 – 5 p.m. (full description | registration)
• Friday, Feb. 23, 1 – 5 p.m. (full description | registration)

Hadoop and Spark workshop
Learn how to process large amounts of data (up to terabytes) using SQL and/or simple programming models available in Python, R, Scala, and Java; a short example sketch follows the session details below.
Location: East Hall, Room B250, 530 Church St.
Dates: (Please sign up for only one)
• Thursday, Feb. 22, 1 – 5 p.m. (full description | registration)
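
For a flavor of the programming model the Hadoop and Spark workshop covers, here is a minimal PySpark sketch that mixes the SQL and Python interfaces; the input file, column names, and query are illustrative placeholders.

```python
# Minimal PySpark sketch mixing the DataFrame and SQL interfaces.
# The input file and column names are illustrative placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("workshop-example").getOrCreate()

# Read a (potentially very large) CSV file into a distributed DataFrame.
events = spark.read.csv("events.csv", header=True, inferSchema=True)

# Register it as a temporary view so it can be queried with plain SQL.
events.createOrReplaceTempView("events")

daily_counts = spark.sql("""
    SELECT event_date, COUNT(*) AS n_events
    FROM events
    GROUP BY event_date
    ORDER BY n_events DESC
""")

daily_counts.show(10)  # print the ten busiest days
spark.stop()
```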