Older (2002-2009)


Retired resources: Herculaneum + Iroquois + Avalanche (retired Nov 2011)

Older Resources:

Most of these were decommissioned as of November 2011.

  • 32-CPU SGI Origin 300, 32GB RAM, 788GB of scratch space
  • 4-CPU SGI Origin 300, 2GB RAM, 63GB of scratch space
  • 28-CPU (generic vendor) Beowulf cluster. Each node has 1 CPU and 512MB RAM; 120GB of scratch space is shared across all nodes.
  • 156-CPU Beowulf cluster from Western Scientific (38 compute nodes with 152 CPUs, plus 1 head node with 4 CPUs). Each node has two dual-core Opteron CPUs (4 CPUs total) and 2GB RAM, and all 38 nodes share 2TB of scratch space.

    Total: 220 CPUs, 128GB RAM, 4TB storage

[Photo] The two Origin 300s and the 1st Beowulf cluster. In the rack on the left, the two units on top are the 8-processor machine; the remaining 8 units comprise the 32-processor machine and nearly a terabyte of disk storage. The rack on the right, the 1st Beowulf, is a collection of 30 1U Linux PCs.

[Photo: olympus, pompeii, herculaneum] Our more recent acquisition. The rack on the left is our new 2nd Beowulf cluster: 39 dual-CPU Opteron nodes for a total of 156 CPUs.

NETWORK
Consortium members access the MERCURY servers over the internet using encrypted channels (SSH).

The Origin 300s have 100BT NICs and are connected to a Cisco Catalyst 3254 switch, which is in turn connected via gigabit Ethernet to Hamilton's backbone. The Beowulf clusters use gigabit (1000BT) interconnects directly into the Hamilton backbone.


HARDWARE
The SGI Origin 300s are composed of compute modules, with 4 MIPS processors per module, linked together with NUMAlink cables through a router. Storage is provided by a TP900 disk array with 15 disks of 72GB each, for a total of 1080GB of disk storage.

The 1st Linux cluster has a custom-built 1.8GHz AMD head node, which houses the scratch directories and is connected to the compute nodes via a private, switched 100BT Ethernet network. The compute nodes are PC Power & Cooling single-processor Intel PIII 1U systems, chosen for their low heat output. The cluster uses the Warewulf Cluster Toolkit to manage the nodes.

The 2nd Linux cluster was purchased from Western Scientific Inc. It comprises 1 head node and 38 compute nodes. The nodes each feature two 2GHz dual-core AMD Opteron 270 processors; with the dual-core technology, each node appears as a 4-CPU compute node, making (38 × 4) 152 compute CPUs in total. The head node is also a 4-CPU machine and provides 2 terabytes of scratch space shared across all of the compute nodes.


SYSTEM SOFTWARE

The Origin 300s run IRIX, SGI's own UNIX-like OS. We have installed and support the SGI MIPSpro C, C++, Fortran 77, and Fortran 90 compilers.

The Beowulf clusters run Red Hat Enterprise Linux version 4 and are configured with the Intel compilers.
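For example (a sketch only; the source file names and flags are illustrative, and any required environment setup is omitted), programs are typically built on the clusters with the Intel compiler drivers:

     icc -O2 -o myprog myprog.c       # Intel C compiler
     ifort -O2 -o mysim mysim.f90     # Intel Fortran compiler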

CHEMISTRY SOFTWARE
We currently run Gaussian, DivCon, NWChem, Jaguar, QSite, Macromodel, DL_POLY, VASP, and Amber.


ACCOUNTS
Access to MERCURY resources is granted to consortium members and their students. If you have not explicitly been granted access to MERCURY and believe you should have, please contact the principal investigator at your school. No activity other than computational chemistry research by authorized users is permitted on MERCURY systems. To request an account on MERCURY, contact the Administrator.

ACCESSING MERCURY

     MERCURY systems can be accessed over the Internet using SSH. SFTP/SCP access is also available for transferring files to and from your computer. Access from the Internet is only allowed through three hosts: chem.hamilton.edu (jake), olympus, and herculaneum. Jobs are scheduled on the resources by the master PBS server on chem.hamilton.edu, so logging into that machine is the preferred method of access. For running jobs, see the Maui and PBS user documentation.
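A minimal sketch of a typical session (the username, file names, resource requests, and the g03 command below are illustrative assumptions, not site defaults):

     ssh username@chem.hamilton.edu
     scp input.com username@chem.hamilton.edu:~/jobs/

A bare-bones PBS job script, submitted with qsub jobscript.pbs, might look like:

     #!/bin/bash
     #PBS -N example-job
     #PBS -l nodes=1:ppn=2
     #PBS -l walltime=12:00:00
     cd $PBS_O_WORKDIR
     g03 < input.com > output.log    # assumes Gaussian 03; substitute your application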

In the event that you are unable to access the machines at Hamilton, please attempt to run a traceroute to the machine you are trying to access, and email the results to us. You can run traceroutes using the following commands/software (Note: substitute the ip address of the machine you are trying to access for X.X.X.X):

UNIX systems (incl. MacOSX): traceroute X.X.X.X
Windows systems: tracert X.X.X.X (from the command line)
Mac 9.x or earlier: WhatRoute

BACKUPS

     Regular backups are taken of the home directories; however, it is always a good idea to keep local copies of your critical files. We currently maintain backups for approximately 60 days.

Note: Scratch files are not backed up and may be removed from the system after they have not been accessed for more than 30 days.
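As a rough sketch (the scratch path is an assumption; substitute the actual scratch directory on your cluster), you can list your scratch files that have not been accessed for more than 30 days with:

     find /scratch/$USER -type f -atime +30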

HOME DIRECTORIES

      Your home directory is located under /home/userid. When referring to your home directory, it is best to reference it as ~/. This way you can be assured that your job files will run correctly no matter where your home directory is located, especially when transferring files between different sites.
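For example (the subdirectory name is illustrative), a job script line such as

     cd ~/jobs/run1

works on any MERCURY machine, whereas a hard-coded path like /home/userid/jobs/run1 can break if your home directory is mounted elsewhere.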


CHEMISTRY SOFTWARE
The following software is installed:

Gaussian
DivCon
Macromodel
Jaguar
Amber 
VASP  


COMPILERS

Compilers are available for the following languages. All compilers are SGI MIPSpro compilers (example invocations are sketched after the list).

C
C++
Fortran 77
Fortran 90
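For example (a sketch only; the source file names and flags are illustrative), the MIPSpro driver commands on the Origins are invoked as:

     cc -O2 -o myprog myprog.c       # C
     CC -O2 -o myprog myprog.C       # C++
     f77 -O2 -o mysim mysim.f        # Fortran 77
     f90 -O2 -o mysim mysim.f90      # Fortran 90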


EDITORS

The following text editors are installed:

vi
pico
nano
nedit (requires setting up an X11 server for graphical displays)
emacs