Computational Systems

Marcy:

Detailed Hardware Specs

Master/Login Node (1)
2U MASTER NODE W/QDR IB:

* (2) Intel™ Sandy Bridge, E5-2660, 2.20GHz, Eight-Core, 95Watt Processor(s)
* (1) Supermicro™ 2U Server and rails (P/N# SYS-6027R-TRF)
* 64GB Memory (8GB x 8), 4GB of memory per core
* (2) 1TB Hard Drive(s) (mirrored RAID 1 for OS and Home)

I/O Node (1)
4U STORAGE NODE WITH QDR INFINIBAND:

* (2) Intel™ Sandy Bridge, E5-2660, 2.20GHz, Eight-Core, 95Watt Processor(s)
* (1) SuperMicro™ Motherboard (P/N# MBD-X9DRi-F)
* (1) SuperMicro™ 4U Chassis and rails (P/N# CSE-847E16-R1400LPB)
* 64GB Memory (8GB x 8), 4GB of memory per core
* (2) 1TB Hard Drive(s) (mirrored RAID 1 for OS and Home)
* (27) 900GB 10K SAS HDD(s) (RAID 6, 20TB Usable Storage)
* (27) 3.5" to 2.5" Hard Drive Converter Tray(s) for the 900GB SAS 10K Drive(s)
* (1) LSI™ 8-Port (4 Internal/4 External) RAID Card
* (1) Mellanox™ Single-Port 40Gbps QDR InfiniBand Card
* (1) CentOS™ OS Software

Thin (32GB RAM) Nodes (12) 
(3) 2U TWIN/TWIN COMPUTE NODE(S) WITH QDR INFINIBAND:

(Four (4) Independent Node(s) per unit = Twelve (12) Total)

* (2) Intel™ Sandy Bridge, E5-2660, 2.20GHz, Eight-Core, 95Watt Processor(s) (per node)
* (1) SuperMicro™ 2U Twin/Twin SuperServer and rails (P/N# SYS-6027TR-HTQRF)
* 32GB Memory (4GB x 8), 2GB of memory per core (per node)
* (1) 1TB Hard Drive (per node)

Medium (64 GB RAM) Nodes (4) 
(2) 2U TWIN “FAT” COMPUTE NODE(S) W/QDR IB: 

(Two (2) Independent Node(s) per unit: Four (4) Total)

* (2) Intel™ Ivy Bridge, E5-2650-v2, 2.60GHz, Eight-Core, 95Watt Processor(s) (per node)
* (1) Supermicro™ 2U Twin SuperServer and rails (P/N# SYS-6027TR-DTQRF)
* 64GB Memory (8GB x 8), 4GB of memory per core (per node)
* (1) 1TB Hard Drive (per node)

Fat (128 GB RAM) Nodes (8) 
(4) 2U TWIN “FAT” COMPUTE NODE(S) W/QDR IB: 

(Two (2) Independent Node(s) per unit: Eight (8) Total)

* (2) Intel™ Sandy Bridge, E5-2660, 2.20GHz, Eight-Core, 95Watt Processor(s) (per node)
* (1) Supermicro™ 2U Twin SuperServer and rails (P/N# SYS-6027TR-DTQRF)
* 128GB Memory (16GB x 8), 8GB of memory per core (per node)
* (1) 1TB Hard Drive (per node)

GPU-Containing Nodes (2)
(2) 1U GPU CAPABLE COMPUTE NODE WITH QDR INFINIBAND: 
(2) NVIDIA K20 GPU CARD(S): One (1) K20 GPU Integrated Per GPU Capable Node


* (2) Intel™ Sandy Bridge, E5-2660, 2.20GHz, Eight-Core, 95Watt Processor(s)
* (1) SuperMicro™ 1U GPU/MIC Capable Server and rails (P/N# SYS-1027GR-TRF)
* 64GB Memory (8GB x 8), 4GB of memory per core
* (1) 1TB Hard Drive
* (1) Mellanox™ Single-Port 40Gbps QDR InfiniBand Card
* (1) 3 meter QSFP to QSFP Cable
* (1) 10’ CAT5 Cable
* Power Cable(s)

 

TOTAL COMPUTE CORES = 416 (26 compute nodes x 16 cores)
TOTAL CORES = 448 (416 compute + 16 master + 16 I/O)
20TB TOTAL USABLE STORAGE

 

 

 

Quick Summary of Hardware Specs

The machine has 28 16-core nodes: a master/login node, a storage (I/O) node, 12 thin (32GB RAM) nodes, 4 medium (64GB RAM) nodes, 8 fat (128GB RAM) nodes, and 2 GPU-containing (16 CPU cores + 1 GPU) nodes.

Master/login node: 16 Intel E5-2660 cores, 64GB RAM, 1 TB mirrored disk
Storage (I/O) node: 16 Intel E5-2660 cores, 64GB RAM, 17 TB disk array
Fat compute nodes (node1-8): 16 Intel E5-2660 cores, 128GB RAM, 2 TB striped disk
Thin compute nodes (node9-20): 16 Intel E5-2660 cores, 32GB RAM, 1 TB disk
GPU nodes (node21-22): 16 Intel E5-2660 cores, 64GB RAM, 1 Nvidia Tesla K20 GPU
Medium compute nodes (node23-26): 16 Intel E5-2650-v2 cores, 64GB RAM, 1 TB disk

Access

The machine (marcy.bucknell.edu) is accessible to MERCURY members from anywhere via SSH. The previous restriction limiting access to MERCURY campuses has been lifted.

ssh -l username marcy.bucknell.edu
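
Files can be copied to and from the cluster with scp in the same way; the filenames and the ~/jobs directory below are only placeholders:

scp input.com username@marcy.bucknell.edu:~/jobs/
scp username@marcy.bucknell.edu:~/jobs/output.log .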

Once you log into the 'master' node, you can submit jobs to the compute nodes via PBS. You can run short tests on the master node, but nothing that takes a lot of time, memory, or disk space.
 

Software

The machine has a lot of software installed in /usr/local/Dist, and more will be added as necessary. Most of this software is added to your path at login, but some packages need to be loaded using the 'module' tool (see the example after the lists below). Here are the most commonly used packages.

A. Chemistry

i. Gaussian09 A.04

ii. NWChem 6

iii. PSI4

iv. AMBER 12

v. ORCA 2.9

vi. NAMD

vii. Open Babel

B. General tools

i. Intel Compilers (12, 13)

ii. Intel MKL libraries (10.3, 13)

iii. OpenMPI (1.3.3, 1.6.4)

iv. Mvapich2 (1.9)

v. Python (2.6, 2.7, 3.3)

C. Modules loaded at login (execute 'module list' at login)

i. modules

ii. torque-maui

iii. mvapich2

D. Other available modules (execute 'module avail')
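
As a quick sketch of how the 'module' tool is used (the OpenMPI module name below is the one that appears in the ORCA example later on this page; other module names will differ):

module avail                                    # list all available modules
module list                                     # show the modules currently loaded
module load mpi/openmpi-1.6.4_gnu-4.4.7_ib      # load a module into your environment
module unload mpi/openmpi-1.6.4_gnu-4.4.7_ib    # remove it again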

 

Job Submission

Marcy uses the Torque/Maui job scheduler, commonly referred to as PBS. You can submit, monitor, alter, and delete jobs using commands like 'qsub', 'qstat', 'qalter', and 'qdel', each with its own large set of options.
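
A typical workflow looks like the following sketch, where job.pbs and the job ID 12345 are just placeholders:

qsub job.pbs                          # submit the job; PBS prints a job ID such as 12345.marcy
qstat -u username                     # list your queued and running jobs
qalter -l walltime=48:00:00 12345     # change the wall time of a job that is still queued
qdel 12345                            # delete the job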

Here are some sample PBS job submission files for commonly used software.

Gaussian09 

#PBS -q batch
#PBS -l mem=16gb
#PBS -l nodes=1:ppn=8
#PBS -l walltime=24:00:00
#PBS -j oe
#PBS -e j-test
#PBS -N j-test
#PBS -V

set echo
cd $PBS_O_WORKDIR

g09 test.com
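
With '#PBS -j oe', the job's error output is merged into its standard output (so the '#PBS -e' line is effectively ignored), and when the job finishes you will find a single file named after the job (here j-test.oNNNNN, where NNNNN is the numeric job ID) in the directory you submitted from.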

ORCA 

#PBS -q batch
#PBS -l mem=16gb
#PBS -l nodes=1:ppn=8
#PBS -l walltime=24:00:00
#PBS -j oe
#PBS -e j-test
#PBS -N j-test
#PBS -V

set echo
cd $PBS_O_WORKDIR

module load mpi/openmpi-1.6.4_gnu-4.4.7_ib
runorca-2.9.csh test $PBS_JOBID
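
The 'module load' line matters here: ORCA 2.9 presumably runs its parallel steps through OpenMPI, so the matching OpenMPI module has to be in your environment before the locally provided runorca-2.9.csh wrapper is invoked.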

AMBER 

#PBS -q batch
#PBS -l mem=16gb
#PBS -l nodes=1:ppn=8
#PBS -l walltime=24:00:00
#PBS -j oe
#PBS -e j-test
#PBS -N j-test
#PBS -V

set echo
cd $PBS_O_WORKDIR

mpiexec -np 8 sander.MPI -O -i md.in -o md.out -p prmtop -c min.rst -r md.rst -x md.crd
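
Note that the number of MPI processes passed to mpiexec (-np 8) should match the cores requested with ppn=8; if you change one, change the other to match.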