Computer Science Department News

The Systems Group has configured a new HPC cluster for departmental use. The new cluster's FQDN is hpcd.cs.odu.edu.

Hardware Configuration:

The HPCD cluster is a group of 35 Dell PowerEdge servers. The 32 compute nodes are Dell PowerEdge R410 machines, each with two Xeon E5504 processors @ 2.0 GHz and 16GB of RAM.

The head node is a Dell PowerEdge R410 with two Xeon E5520 processors @ 2.7 GHz and 8GB of RAM.

One Dell PowerEdge R510 with two Xeon E5530 processors @ 2.4 GHz and 16GB of RAM serves as the PVFS2 I/O node. One Dell PowerEdge R410 with two Xeon E5520 processors @ 2.7 GHz and 8GB of RAM serves as the PVFS2 metadata node.

Naming Scheme:

The head node is named hpcd, and the compute nodes are named compute-0-0 through compute-0-31. hpcd.cs.odu.edu is the canonical name for the head node.

Interconnect:

All servers are interconnected with InfiniBand at 40 Gbps (4x QDR) and Gigabit Ethernet. InfiniBand is the recommended interconnect for MPI applications.

File Systems:

Each node in the cluster has a local ext3 scratch file system mounted on /localscratch. This partition can be utilized for temporary storage and I/O that does not need to be shared between the nodes.
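A minimal sketch of node-local scratch usage in C (the file name below is only an example): each process writes and removes its own temporary file under /localscratch, and the data is visible only on the node that wrote it.

/*
 * Example of per-node temporary I/O on /localscratch.
 * The file name is hypothetical; data written here is local to the node.
 */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    char path[256];
    /* Use the PID to keep scratch files from different processes apart. */
    snprintf(path, sizeof(path), "/localscratch/tmp_%d.dat", (int)getpid());

    FILE *fp = fopen(path, "w");
    if (fp == NULL)
        return 1;

    for (int i = 0; i < 1000; i++)
        fprintf(fp, "%d\n", i);        /* temporary, node-local data */

    fclose(fp);
    /* Remove the scratch file when it is no longer needed. */
    remove(path);
    return 0;
}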

The cluster uses PVFS2 and has one I/O node (Dell R510) with 16GB of memory and one metadata node (Dell R410) with 8GB of memory. The PVFS2 shared space is 2 TB, located at /pvfs2scratch. To use the PVFS2 file system optimally, parallel read and write applications should use MPI-IO (ROMIO).
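The sketch below illustrates the kind of MPI-IO access pattern that benefits from PVFS2 (the file name under /pvfs2scratch is only an example; compile with the mpicc from one of the MPI installations listed under "MPI implementations" below). Each rank writes a disjoint block of one shared file, so ROMIO can carry out the writes in parallel.

/*
 * MPI-IO sketch: every rank writes its own block of a shared file
 * on the PVFS2 scratch space. File name is an example only.
 */
#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int COUNT = 1024;                  /* integers written per rank */
    int *buf = malloc(COUNT * sizeof(int));
    for (int i = 0; i < COUNT; i++)
        buf[i] = rank;

    MPI_File fh;
    MPI_File_open(MPI_COMM_WORLD, "/pvfs2scratch/example.dat",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);

    /* Each rank writes at a disjoint offset, so the writes do not overlap. */
    MPI_Offset offset = (MPI_Offset)rank * COUNT * sizeof(int);
    MPI_File_write_at_all(fh, offset, buf, COUNT, MPI_INT, MPI_STATUS_IGNORE);

    MPI_File_close(&fh);
    free(buf);
    MPI_Finalize();
    return 0;
}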

There is also a 2 TB NFS shared space located at /nfsscratch. Both PVFS2 and NFS run over Gigabit Ethernet. Software is located at /export/software. For serial reads and writes that need to be shared between nodes, NFS will provide better performance than PVFS2.

Note: /localscratch is not protected in any way; in case of a node failure, all data on it will be lost. PVFS2 and NFS shared data is stored on RAID 5 storage, which is a more robust configuration.

MPI implementations:

A variety of MPI implementations can be found in /export/software. Some are configured for InfiniBand, while others use Gigabit Ethernet; a short test program is shown after the list below.

InfiniBand:
  /export/software/mvapich1
  /export/software/mvapich2
  /export/software/openmpi-iBand

Gigabit Ethernet:
  /export/software/lam-ifort
  /export/software/mpich
  /export/software/mpich2
  /export/software/mpich2-threads
  /export/software/mpich-ifort
  /export/software/openmpi-gigE
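A basic sanity-check program for any of the installations above (compile it with the mpicc wrapper from the chosen installation; which one you pick determines whether the job runs over InfiniBand or Gigabit Ethernet):

/*
 * Basic MPI sanity check: each rank reports its rank, the communicator
 * size, and the compute node it is running on.
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    char name[MPI_MAX_PROCESSOR_NAME];
    int len;
    MPI_Get_processor_name(name, &len);

    printf("rank %d of %d running on %s\n", rank, size, name);

    MPI_Finalize();
    return 0;
}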

Intel Products:

We have installed the Intel Fortran and C++ compilers. The current version is 11.1.072.

Intel Fortran: /export/software/ifort

Intel C++: /export/software/icc

Older versions can be found at /export/software/intel.

We have also installed Intel VTune and Intel Thread Checker.

/export/software/vtune

/export/software/itt