
Haystack (Oct 2009 – June 2013)

MERCURY SGI Altix 3700 Bx2

Summary

The SGI Altix 3700 Bx2 has:

  1. 128 1.6GHz Intel Itanium2 processors, each with 6MB of last-level cache
  2. 640GB of globally shared memory
  3. a 7.2TB RAID5 SGI InfiniteStorage 220
  4. a NUMAlink interconnect
  5. a single system image of Novell SUSE Linux Enterprise Server 10.0 SP2 + SGI ProPack 6
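Because the machine runs a single system image, all of the CPUs and memory above are visible to one Linux kernel. A quick sketch of how you could confirm this from a shell session (standard /proc files, nothing haystack-specific):

```shell
# On a single-system-image machine, one kernel sees every CPU and all of
# the globally shared memory; /proc reports the totals directly.
grep -c ^processor /proc/cpuinfo   # number of CPUs visible to the kernel
grep MemTotal /proc/meminfo        # total memory visible to the kernel
```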

Hardware Configuration

The machine physically appears as two 40U SGI racks:

  1. 8 compute (CR) bricks and 2 router (R) bricks per rack
  2. The first rack has an I/O (IX) brick; the second rack has an IS220 storage unit
  3. Three power bays per rack
  4. Each of the 16 CR-bricks has 8 single-core Intel Itanium2 processors
  5. CR-bricks are interconnected to the R-bricks and the IX-brick via 80 NUMAlink cables
  6. The IX-brick has 10/100/1000 Ethernet, a CDROM drive, 11 PCI-X slots, and two 146GB system disks
  7. The storage unit is an SGI InfiniteStorage 220: 2 trays holding 21 450GB 15K RPM SAS drives, configured as 7.2TB of RAID5 (four 4+1 arrays plus 1 hot spare)

Software

The machine has the following software; more will be added as necessary.

  1. Chemistry
     i. Gaussian09 C.03; Gaussian03 Rev. D.03, E.03
     ii. NWChem 5.1.1
     iii. PSI 3.4
     iv. AMBER 9.1
     v. MPQC 2.4
     vi. ORCA 2.8
  2. General tools, aside from what is available in stock SLES 10.2
     i. Intel Compilers (11.0+)
     ii. Intel MKL libraries (10.2+)
     iii. OpenMPI 1.4.2+
     iv. SGI MPT 1.23
     v. SGI ProPack 6

Access

The machine is accessible to MERCURY members and the Bucknell community.

  1. MERCURY members

     i. You can access the machine (haystack) through MERCURY resources at Hamilton College:
        ssh username@jake.hpc.hamilton.edu. From jake, ssh haystack.
        (Email Steve Young at Hamilton or Berhane Temelso at Bucknell if you have any questions.)

     ii. A few details on using haystack:
        – Home directories are NFS mounted from Hamilton servers, so expect some latency when you first log in.
        – Binaries are also NFS mounted from Hamilton. All the applications you ran on Hamilton's SGI Altix machines will run on haystack.
        – Since there is no queue manager (PBS, SGE, LSF, etc.) running on haystack, you have to run everything interactively and take precautions to ensure the machine is not overloaded.
        – 64 of the 128 CPUs are always available; the other 64 are powered on or off depending on the machine's use. In the summer, when there is a lot of need for computer time, the machine operates at full capacity. During the academic year, half the machine's capacity is sufficient to meet computational needs.
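With no queue manager, a courteous interactive launch might look like the sketch below. The application name, thread count, and log file are placeholders, not haystack conventions:

```shell
# Check the load average first so you don't overload the shared machine.
uptime

# Limit a threaded job to a modest share of the CPUs; "myjob" and the
# thread count are placeholders for your own application and allocation.
export OMP_NUM_THREADS=8

# nohup keeps the job running after you log out; nice lowers its priority
# so other interactive users are not starved.
nohup nice -n 10 ./myjob > myjob.log 2>&1 &
```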

  2. Bucknell community

     i. You can request an account (Email: Berhane Temelso).
     ii. You can access the machine from campus using ssh (ssh username@haystack.bucknell.edu).
     iii. For security reasons, the machine is not accessible directly from outside of campus. You would need to either:
        1. use a Bucknell VPN tunnel and then ssh to the machine, OR
        2. log into a Bucknell server (say linuxremote.eg.bucknell.edu) and ssh to haystack from there. The second method is typically faster.
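The second, two-hop method can be automated with an OpenSSH client configuration on your own machine. This is a sketch assuming a reasonably recent OpenSSH client (7.3+ for ProxyJump; older clients would use ProxyCommand instead); the hostnames come from the instructions above, and "username" is a placeholder:

```
# Hypothetical ~/.ssh/config entry; "username" is a placeholder.
Host haystack
    HostName haystack.bucknell.edu
    User username
    # Hop through the on-campus server first (requires OpenSSH 7.3+)
    ProxyJump username@linuxremote.eg.bucknell.edu
```

With this entry in place, a single `ssh haystack` performs both hops.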