Using the Allinea DDT Debugger and the Allinea MAP Profiler


DDT is a parallel debugger from Allinea. It has a graphical user interface and can be used for debugging Fortran, C, and C++ programs that have been parallelized with MPI and OpenMP, as well as UPC programs. Additional information can be found at

MAP is a low-overhead profiler from Allinea for both scalar and MPI programs.

Both tools share a common environment.

In most cases, a debugger and a profiler make it much faster and easier to find errors in the code, or opportunities to optimize its performance, than inserting numerous print statements.

The Allinea DDT and MAP user guide can be downloaded from

Note that even though DDT and MAP support GPU languages, such as HMPP, OpenMP Accelerators, CUDA and CUDA Fortran, we don't have a license to use DDT and MAP on the GPUs.

Starting DDT and MAP


This example shows how to compile an MPI C program and run it under the DDT debugger on two compute nodes. The provided example does not contain an error. After running it as explained below, try introducing an error in the code (for example, declare the array name to be of size 10) and run it again.
  1. Copy the example program to your directory:
    	cp /home/SAMPLES/mpihello.c ./
  2. Compile the program using the -g debug compiler flag:
    	mpiicc -g mpihello.c 
  3. Start DDT debugger:
    	ddt &
    First a small window saying "Allinea DDT" will open; it will then be replaced by a larger window.
  4. Click on the green triangle to run and debug a program.

  5. In the application field, enter the full path of the executable that you would like to debug. If you issued the cp command above from your home directory, enter "/home/<user>/a.out", where <user> is your username on the student cluster.

  6. Select the checkboxes next to the "MPI" and "Submit to Queue" options.

  7. Click on the "Configure..." button in the "Submit to Queue" option.

  8. In the "Submission template file" field enter "/shared/allinea/forge/templates/hpc-class.qtf". After entering this, four other fields will be populated.

  9. Make sure that "Specify in Run window" is selected for all three variables: NUM_PROCS_TAG, NUM_NODES_TAG, PROCS_PER_NODE_TAG. Then click the "OK" button.

  10. In the MPI section, select 4 processes, 2 nodes, and 2 processes per node.

  11. Click the "Submit" button at the bottom.

  12. A new window for entering queue parameters will appear; just click the "OK" button.

  13. DDT will submit your job to the queue. When the job starts running, DDT will attach to the processes, and the code will appear in the central part of the window.

  14. Right-click on line 18 (on the line number) and select "Add breakpoint for All". Do the same at line 21.

  15. Click on the green triangle in the upper left corner of the window. A small window should appear reporting that processes 0-3 stopped at the breakpoint on line 18. Click "Pause".

  16. Explore the information provided in the lower part of the window (click on the different tabs) and the values of variables displayed on the right. Select the squares numbered 0 through 3 to see information for the various processes. Notice that MyId has a different value in each process (select the Locals tab to see local variables), while name still contains garbage.

  17. Once again click on the green triangle in the upper left corner of the window. The processes will be stopped at the next breakpoint, and a small notification window will appear. Select "Pause" and see what changed. In the Input/Output window you should now see Hello messages from all 4 processes, and in the variable window name should now have the correct value.

  18. To finish the execution, click on the green triangle, and when prompted about restarting the session, select "No".

  19. You can start a new session by selecting the appropriate option in the File menu.

DDT Reference

The following is a list of the most important operations DDT supports:

The following summarizes the most useful items from the toolbars at the top of the DDT window:

For the selected MPI process, the values of the local variables at the point where execution has stopped are displayed in the center-right pane. Multi-dimensional array values can be viewed by selecting the Multi-Dimensional Array Viewer, listed under View.