Maverick User News

Training: OpenMP Training Events - April 12th and 13th, 2018

Posted by Jason Allison on Mar 22, 2018 12:19:22 PM

We are pleased to announce that the following OpenMP training events will be offered to both in-person and webcast participants on April 12th and 13th, 2018. The courses include hands-on exercises on TACC systems. Local participants are strongly encouraged to attend in person. Instructors will be available after class to consult with in-person participants on individual projects.

4/12/18 9am-12:30pm CT - Introduction to OpenMP
This course will introduce participants to the OpenMP threading model, and describe the basic constructs necessary to parallelize loops on multi-core architectures. Topics include the fork/join threading model, using OpenMP directives, and loop parallelization. The fundamentals of hybrid computing (MPI & OpenMP) will be explained and illustrated.
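As a hedged illustration (not taken from the course materials; the file name and array sizes are placeholders), the following minimal C program shows the fork/join model and a parallelized loop:

    /* Minimal loop-parallelization sketch; compile with OpenMP enabled,
       e.g. gcc -fopenmp or icc -qopenmp. */
    #include <stdio.h>
    #include <omp.h>

    #define N 1000000

    int main(void) {
        static double a[N], b[N], c[N];

        /* Serial initialization on the initial thread. */
        for (int i = 0; i < N; i++) {
            a[i] = i;
            b[i] = 2.0 * i;
        }

        /* Fork a team of threads, divide the loop iterations among them,
           then join back to a single thread when the loop completes. */
        #pragma omp parallel for
        for (int i = 0; i < N; i++)
            c[i] = a[i] + b[i];

        printf("c[N-1] = %f, max threads = %d\n", c[N - 1], omp_get_max_threads());
        return 0;
    }

In a hybrid MPI & OpenMP code, the same loop-level directives are applied inside each MPI rank, with the rank count and thread count chosen to match the node layout.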

4/13/18 9am-12:30pm CT - Advanced OpenMP
This course will provide an introduction to OpenMP optimization techniques for multi-core and vectorized architectures. Topics will include OpenMP SIMD directives, configuring OpenMP thread affinity, tasking, and task dependences.
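For orientation, here is a hedged sketch (not course material) of two of these constructs, SIMD vectorization and tasking with a dependence; thread affinity itself is usually configured through the OMP_PROC_BIND and OMP_PLACES environment variables rather than in code:

    /* Illustrative sketch of OpenMP SIMD and task-dependence constructs. */
    #include <stdio.h>

    int main(void) {
        double a[1024], b[1024], sum = 0.0;
        int x = 0;

        for (int i = 0; i < 1024; i++) { a[i] = 1.0; b[i] = 2.0; }

        /* SIMD directive: ask the compiler to vectorize the loop,
           combining per-lane partial sums with a reduction. */
        #pragma omp simd reduction(+:sum)
        for (int i = 0; i < 1024; i++)
            sum += a[i] * b[i];

        /* Tasking with a dependence: the consumer task waits for the
           producer task because both declare a dependence on x. */
        #pragma omp parallel
        #pragma omp single
        {
            #pragma omp task depend(out: x)
            x = 42;

            #pragma omp task depend(in: x)
            printf("consumer sees x = %d\n", x);
        }

        printf("dot product = %f\n", sum);
        return 0;
    }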

To register and for more information please visit: https://learn.tacc.utexas.edu/

If you have any questions please contact me at jasona@tacc.utexas.edu

TACC Maintenance 11 March, 2018

Posted by Mitchell Collins-Bailey on Feb 26, 2018 5:32:29 PM

Access to all TACC systems will be unavailable from 9:00 AM CDT until 2:00 PM CDT on March 11, 2018 to allow for upgrades to the TACC core network hardware. Jobs will continue to run, but users will have no access to TACC services and systems until the upgrade is complete.

TACC Winter Break Schedule

Posted by Chris Hempel on Dec 21, 2017 6:42:54 AM

TACC personnel will observe the University of Texas at Austin winter break beginning at 5 p.m. (CST) on Thursday, 21 December 2017, and will resume normal business hours on Tuesday, 2 January 2018. A staff member will be on site to monitor the status of all TACC resources. TACC support staff will monitor the consulting system throughout the break and address critical system issues. The staff will address other issues beginning Tuesday, 2 January 2018.


Please submit any questions you may have via the TACC Consulting System.
https://portal.tacc.utexas.edu/tacc-consulting

Training: Introduction to OpenMP using the Interactive Parallelization Tool (IPT)

Posted by Jason Allison on Dec 1, 2017 11:58:48 AM

December 14th, 2017 9am-1pm CT
Texas Advanced Computing Center
ACB 1.104
J.J. Pickle Research Campus
10100 Burnet Rd. Austin, TX 78758

OpenMP is one of the most popular paradigms for exploiting today's ubiquitous multi-core and manycore processors. In this beginner-level training session, we will provide an overview of the basic concepts of OpenMP and introduce the Interactive Parallelization Tool (IPT), which is designed to parallelize serial C/C++ programs semi-automatically. Participants will learn the fundamentals of OpenMP and how to use IPT to parallelize their own C/C++ applications.
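As a hedged illustration of the kind of transformation such a tool performs (this is not actual IPT output; the loop is a placeholder), a serial accumulation and its OpenMP counterpart might look like:

    /* Serial version:
     *   for (int i = 1; i <= 100000; i++) sum += 1.0 / i;
     *
     * OpenMP version with a reduction clause, so each thread accumulates a
     * private partial sum that is combined when the threads join.
     */
    #include <stdio.h>

    int main(void) {
        double sum = 0.0;

        #pragma omp parallel for reduction(+:sum)
        for (int i = 1; i <= 100000; i++)
            sum += 1.0 / i;

        printf("partial harmonic sum = %f\n", sum);
        return 0;
    }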

Prerequisites: Experience working in a Linux environment, and familiarity with C/C++/Fortran or any other programming language.

We are offering the training to both in-person and webcast participants. Local participants are strongly encouraged to attend in person.

To attend the training in person, please contact me via email at jasona@tacc.utexas.edu.

To attend via webcast, please enroll for the training at:
https://learn.tacc.utexas.edu/mod/chat/view.php?id=30

You will need to sign in with your TACC User Portal account and password to enroll.

Maverick: New queues to support long gpu runs

Posted by Chris Hempel on Nov 27, 2017 4:34:48 PM

Updated on Dec 1, 2017 8:59:34 AM

The runtime limit on the gpu queue has been increased to 24 hours.

Original Posting

Two new queues have been configured on Maverick to accommodate GPU jobs that require more runtime than allowed in the gpu queue. These two queues are configured as follows:

gpu-long
- up to 72 hours runtime, one node per job (i.e. sbatch -N 1 and/or -n 20 or less)
- maximum of 8 jobs allowed in queue per user

gpu-verylong
- up to 120 hours runtime, one node per job
- maximum of 3 jobs allowed in queue per user

These queues are available immediately for use and do not require special permission to access. The gpu queue retains its 12-hour runtime limit.
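As a hedged example (the job name and executable are placeholders, not part of this announcement), a batch script targeting the new gpu-long queue could look like:

    #!/bin/bash
    #SBATCH -J long_gpu_run       # job name (placeholder)
    #SBATCH -p gpu-long           # one of the new queues described above
    #SBATCH -N 1                  # one node per job, as the queue requires
    #SBATCH -n 20                 # up to 20 tasks on that node
    #SBATCH -t 72:00:00           # up to the 72-hour limit for gpu-long

    ./my_gpu_application          # placeholder for your own executable

For gpu-verylong, change the partition name to gpu-verylong and the time limit to at most 120:00:00.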

Please submit any questions you have via the TACC Consulting System.

https://portal.tacc.utexas.edu/tacc-consulting

Maverick Maintenance 21 November 2017

Posted by Matthew Edeker on Nov 6, 2017 11:26:03 AM

Updated on Nov 21, 2017 3:02:42 PM

Maverick is back in production. 

Original Posting

Maverick will not be available from 8 a.m. to 5 p.m. (CT) on Tuesday, 21 November 2017. Maintenance on the Slurm scheduler will be performed during this time.