Arm DDT Debugger at TACC
Last update: May 18, 2020
Arm DDT is a symbolic, parallel debugger providing graphical debugging of C, C++ and Fortran threaded and parallel codes (MPI, OpenMP, and Pthreads applications). DDT is available on all TACC compute resources. Use the DDT Debugger with the MAP Profiler to develop and analyze your HPC applications.
Before running any debugger, the application code must be compiled with the "-g" and "-O0" options as shown below:
login1$ mpif90 -g -O0 mycode.f90
login1$ mpicc -g -O0 mycode.c
Follow these steps to set up your debugging environment on Frontera, Stampede2, Lonestar5 and other TACC compute resources.
Enable X11 forwarding. To use the DDT GUI, ensure that X11 forwarding is enabled when you ssh to the TACC system. Use the "-X" option on the ssh command line if X11 forwarding is not enabled in your SSH client by default.
localhost$ ssh -X email@example.com
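If you connect often, you can make X11 forwarding the default for TACC hosts in your SSH client configuration instead of typing "-X" every time. A minimal ~/.ssh/config stanza might look like the following (the host pattern is illustrative; adjust it to the systems you use):

```
Host *.tacc.utexas.edu
    ForwardX11 yes
```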
Load the DDT module on the remote system along with any other modules needed to run the application:
$ module load ddt mymodule1 mymodule2
NOTE: On Stampede2 there are two DDT modules, ddt_knl and ddt_skx, because the KNL nodes require a different license:
$ module load ddt_knl mymodule1 mymodule2 # for KNL nodes
$ module load ddt_skx mymodule1 mymodule2 # for SKX nodes
Start the debugger:
$ ddt myprogram
If this error message appears…
ddt: cannot connect to X server
…then X11 forwarding was not enabled (Step 1) or the system may not have local X11 support. If logging in with the "-X" flag doesn't fix the problem, please contact the help desk for assistance.
Click the "Run and Debug a Program" button in the "DDT - Welcome" window:
This displays the "Run" window, where you specify the executable path, command-line arguments, and processor count. Once set, these values remain from one session to the next.
Select each of the "Change" buttons in this window, and adjust the job parameters.
In the "Queue Submission Parameters" window, fill in the following fields:
Queue: the default queue is "skx-dev" on Stampede2 and "development" on other systems
Project: the allocation/project to charge the batch job to
You must set the Project field to a valid project ID. When you log in, a list of the projects associated with your account and their corresponding balances should appear. Click OK, and you'll return to the "Run" window.
Back in the "Run" window, set the number of tasks you will need in the "Number of processes" box and the number of nodes you will be requesting. If you are debugging an OpenMP program, set the number of OpenMP threads also.
Finally, click "Submit". A submitted status box will appear:
Once your job is launched by the SLURM scheduler and starts to run, the DDT GUI will fill up with a code listing, stack trace, local variables and a project file list. Double click on line numbers to set breakpoints, and then click on the play symbol (>) in the upper left corner to start the run.
By starting DDT from a login node you let it use X11 graphics, which can be slow. Using a VNC connection or the visualization portal is faster, but has its own annoyances. Another way to use DDT is through DDT's Remote Client using "reverse connect". The remote client is a program running entirely on your local machine, and the reverse connection means that the client is contacted by the DDT program on the cluster, rather than the other way around.
Download and install a remote client. Get the latest client available.
Under "Remote Launch" make a new configuration:
Fill in your login name and the cluster to connect to, for instance "stampede2.tacc.utexas.edu". The remote installation directory is stored in the "$TACC_DDT_DIR" environment variable after the module is loaded.
Make the connection; you'll be prompted for your password and two-factor code:
From any login node, submit a batch job where the "ibrun" line is replaced by:
$ ddt --connect -n $SLURM_NPROCS ./yourprogram
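For reference, here is a minimal sketch of what such a batch script might look like for Stampede2 SKX nodes. The queue, node and task counts, wall time, and the MYPROJECT allocation name are placeholders; substitute your own values:

```shell
#!/bin/bash
#SBATCH -J ddt-debug          # job name
#SBATCH -p skx-dev            # queue (Stampede2 SKX development queue)
#SBATCH -N 1                  # number of nodes
#SBATCH -n 4                  # total MPI tasks
#SBATCH -t 00:30:00           # wall time
#SBATCH -A MYPROJECT          # placeholder: your allocation/project ID

module load ddt_skx           # SKX-licensed DDT module on Stampede2

# Instead of:  ibrun ./yourprogram
ddt --connect -n $SLURM_NPROCS ./yourprogram
```

Submit the script with sbatch as usual; when the job starts, DDT contacts the waiting remote client instead of opening an X11 window.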
When your batch job (and therefore your DDT execution) starts, the remote client will ask you to accept the connection:
Your DDT session will now use the remote client.