Maverick User News

Training: Introduction to OpenMP using the Interactive Parallelization Tool (IPT)

Posted by Jason Allison on Dec 1, 2017 11:58:48 AM

December 14th, 2017 9am-1pm CT
Texas Advanced Computing Center
ACB 1.104
J.J. Pickle Research Campus
10100 Burnet Rd. Austin, TX 78758

OpenMP is one of the most popular paradigms for exploiting today's ubiquitous multi-core and many-core processors. In this beginner-level training session, we will provide an overview of the basic concepts of OpenMP and introduce the Interactive Parallelization Tool (IPT), which is designed to semi-automatically parallelize serial C/C++ programs. Participants will learn the fundamentals of OpenMP and how to use IPT to parallelize their own C/C++ applications.
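
For those new to OpenMP, the short sketch below illustrates the kind of loop-level parallelism the session covers: a serial reduction loop annotated with a single OpenMP pragma. It is a generic example (not output produced by IPT) and assumes a compiler with OpenMP support, e.g. gcc -fopenmp sum.c -o sum.

/* Minimal OpenMP sketch: parallelize a serial reduction loop. */
#include <stdio.h>
#include <omp.h>

int main(void) {
    const int n = 1000000;
    double sum = 0.0;

    /* Iterations are divided among the threads; reduction(+:sum) gives
       each thread a private partial sum and combines them at the end. */
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < n; i++) {
        sum += 1.0 / (i + 1);
    }

    printf("max threads: %d, sum = %f\n", omp_get_max_threads(), sum);
    return 0;
}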

Prerequisites: Experience working in a Linux environment, and familiarity with C/C++/Fortran or any other programming language.

We are offering the training to both in-person and webcast participants. Local participants are strongly encouraged to attend in person.

To attend the training in person, please contact me via email at jasona@tacc.utexas.edu.

To attend via webcast, please enroll for the training at:
https://learn.tacc.utexas.edu/mod/chat/view.php?id=30

You will need to sign in with your TACC User Portal account and password to enroll.

Maverick: New queues to support long gpu runs

Posted by Chris Hempel on Nov 27, 2017 4:34:48 PM

Updated on Dec 1, 2017 8:59:34 AM

The runtime limit on the gpu queue has been increased to 24 hours.

Original Posting

Two new queues have been configured on Maverick to accommodate GPU jobs that require more runtime than allowed in the gpu queue. These two queues are configured as follows:

gpu-long
- up to 72 hours runtime, one node per job (i.e., sbatch -N 1 and/or -n 20 or fewer)
- maximum of 8 jobs allowed in queue per user

gpu-verylong
- up to 120 hours runtime, one node per job
- maximum of 3 jobs allowed in queue per user

These queues are available for use immediately and do not require special permission to access. The gpu queue retains its 12-hour runtime limit.
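
As a rough illustration, a batch script targeting the gpu-long queue might look like the sketch below. The job name, output file, allocation, and executable are placeholders; the partition, node, task, and time settings follow the limits listed above. For the gpu-verylong queue, change -p accordingly and request up to -t 120:00:00.

#!/bin/bash
#SBATCH -J gpu_long_job          # job name (placeholder)
#SBATCH -o gpu_long_job.%j.out   # stdout file; %j expands to the job ID
#SBATCH -p gpu-long              # long-runtime GPU queue (up to 72 hours)
#SBATCH -N 1                     # one node per job (queue limit)
#SBATCH -n 20                    # up to 20 tasks on that node
#SBATCH -t 72:00:00              # requested runtime, within the 72-hour limit
#SBATCH -A my_project            # placeholder: replace with your allocation

./my_gpu_app                     # placeholder: replace with your GPU executable

Submit the script with sbatch as usual; no special permission is needed for either queue.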

Please submit any questions you have via the TACC Consulting System.

https://portal.tacc.utexas.edu/tacc-consulting

Maverick Maintenance 21 November 2017

Posted by Matthew Edeker on Nov 6, 2017 11:26:03 AM

Updated on Nov 21, 2017 3:02:42 PM

Maverick is back in production. 

Original Posting

Maverick will not be available from 8 a.m. to 5 p.m. (CT) on Tuesday, 21 November 2017. Maintenance on the Slurm scheduler will be performed during this time.

Stampede2 Extended Outage Begins Friday, 20 Oct 2017 at 8am CDT

Posted by Jason Allison on Oct 17, 2017 2:50:15 PM

Updated on Oct 23, 2017 4:49:37 PM

The Stampede2 system maintenance will need to be extended until at least 6 PM CDT tomorrow to finish integrating the Phase 2 nodes into the system.


Original Posting

Stampede2 will be unavailable for a four-day period beginning Friday, 20 Oct 2017 at 8 am CDT. This extended outage will allow (1) full-system science and benchmarking runs, as well as (2) configuration, integration, and testing activities to prepare for the Stampede2 “Phase 2” deployment.

Phase 2 will feature the addition of 1,736 “Skylake” Xeon nodes to the system. Stampede2 will be completely unavailable during the maintenance window. We expect to reopen the KNL nodes to general use by 6 pm, Monday, 23 Oct 2017.  

Please submit any questions you have via the TACC User Portal:
https://portal.tacc.utexas.edu/tacc-consulting

MPI Foundations I and II - Oct 6th, 2017 - Space Available

Posted by Jason Allison on Oct 4, 2017 2:42:54 PM

We still have space available for this Friday's MPI Foundations I and II training events. Please register ASAP to avoid missing out on this opportunity. Local attendees are strongly encouraged to attend in-person.

To register and for more information please visit: https://portal.tacc.utexas.edu/training

October 2017 TACC Training Events

Posted by Jason Allison on Sep 15, 2017 4:14:45 PM

I am pleased to announce the following training events are being offered to both in-person and webcast participants for October 2017. Local participants are strongly encouraged to attend in person.

10/4/17 - Introduction To Hadoop And Spark On Wrangler
10/6/17 - MPI Foundations I and MPI Foundations II
10/11/17 - Introduction to Scala/Spark
10/18/17 - Data Analysis Using Hadoop/Spark

To register and for more information please visit: https://portal.tacc.utexas.edu/training

Training: Introduction to OpenMP using the Interactive Parallelization Tool (IPT)

Posted by Jason Allison on Aug 24, 2017 4:40:22 PM

Updated on Sep 12, 2017 2:29:55 PM

We still have spaces available for the Introduction to OpenMP using the Interactive Parallelization Tool (IPT) training on September 14th, 2017. Please register ASAP to avoid missing out on this opportunity.

Original Posting

September 14th, 2017 1pm-5pm CT
Texas Advanced Computing Center
ACB 1.104
J.J. Pickle Research Campus
10100 Burnet Rd. Austin, TX 78758

OpenMP is one of the most popular paradigms for exploiting today's ubiquitous multi-core and many-core processors. In this beginner-level training session, we will provide an overview of the basic concepts of OpenMP and introduce the Interactive Parallelization Tool (IPT), which is designed to semi-automatically parallelize serial C/C++ programs. Participants will learn the fundamentals of OpenMP and how to use IPT to parallelize their own C/C++ applications.

Prerequisites: Experience working in a Linux environment, and familiarity with C/C++/Fortran or any other programming language.

We are offering the training to both in-person and webcast participants. Local participants are strongly encouraged to attend in person.

For more information and to register for either the in-person or webcast session, please visit:
https://portal.tacc.utexas.edu/training#/user?training=upcoming

Maverick Status 10-12 August 2017

Posted by Sergio Leal on Aug 8, 2017 2:40:20 PM

Maverick is reserved for 48 hours beginning Thursday, 8/10/2017 at 8 AM CDT to support a large-scale, real-time simulation project. During that time, no new jobs will start, but login nodes and filesystems will remain available. Also, jobs that cannot run during this 48-hour window will be held in queue until the system resumes production, when they will become eligible to run. Maverick is scheduled to resume normal operations at 8 AM CDT on Saturday, 8/12/2017.


Please submit any questions you may have via the TACC User Portal.

TACC Maintenance 18 July, 2017

Posted by Jacob Getz on Jul 11, 2017 9:47:53 AM

Updated on Jul 19, 2017 6:25:46 PM

The /work filesystem has been restored on all TACC production systems. All held user jobs have been released and should be running or queued to run now.
 
Thank you, 

-TACC

Updated on Jul 19, 2017 9:55:21 AM

Maintenance has been extended for the Stockyard global filesystem.  An update will be posted to User News when resources are back in full production.

Original Posting

Access to all TACC systems will be unavailable from 7:00 AM CDT on July 18 until 8:00 AM CDT on July 19 to allow for upgrades to the core network switch hardware and to perform system maintenance on the Stockyard global filesystem. Users will have intermittent access to all TACC services and systems until the core network switch upgrade is complete.

During this maintenance window, the production systems Stampede2, Stampede, Lonestar5, Maverick, Wrangler, Ranch, and Hikari will also be down for system maintenance and unavailable to users until after the Stockyard global filesystem maintenance has been completed. Updates to user news will be sent once network services have been restored and when the production systems are restored to normal operations.

-TACC Team

TACC Sitewide Power Event 6-23-17

Posted by Jacob Getz on Jun 23, 2017 9:30:03 AM

Updated on Jun 23, 2017 10:37:04 PM

All TACC systems and services have been restored to normal operation after the power loss this morning.  Please submit a ticket if you encounter any problems with access to TACC systems or services.

Original Posting

At 1:15 AM Central Time, TACC and our facilities experienced a power outage. Every TACC system was affected. Power has since been restored. An update will be posted to User News when resources are back in full production.


-TACC Team

Maverick Status 23 June 2017

Posted by Sergio Leal on Jun 23, 2017 2:23:17 PM

Maverick has recovered from our earlier power outage and is back in production as of 14:00 CDT today.

TACC status 19 July, 2017

Posted by David Littrell on Jun 21, 2017 11:02:50 AM

Updated on Jun 21, 2017 11:22:47 AM

Jetstream will also be affected. 

Original Posting

Access to all TACC systems will be unavailable from 7:00 AM CDT on July 18 until 8:00 AM CDT on July 19 to allow for upgrades to the core network switch hardware and to perform system maintenance on the Stockyard global filesystem. Users will have intermittent access to all TACC services and systems until the core network switch upgrade is complete.

During this maintenance window, the production systems Stampede2, Stampede, Lonestar5, Maverick, Wrangler and Hikari will also be down for system maintenance and unavailable to users until after the Stockyard global filesystem maintenance has been completed. Updates to user news will be sent once network services have been restored and when the production systems are restored to normal operations.