Highlights:

June 26, 2017:
Skylight is in full production.

Apr 11, 2017:
Our paper on ArbAlign has been published here.

March 13, 2017:
Dr. Temelso is named as a 2017 Foresight Fellow in Computational Chemistry.

Feb 3, 2017:
Dr. Shields gave an invited talk at UVA.

Sept 01, 2016:
The MERCURY consortium was awarded an NSF MRI grant to purchase a new computer cluster.

August 01, 2016:
After six wonderful years at Bucknell, our group has moved to Furman.

July 21-23, 2016:
Our group hosted the 15th MERCURY conference for computational chemistry at Bucknell.

March 18, 2016:
Our collaboration on tunneling in water hexamers was published in Science. See the paper, as well as the perspective piece and video describing its significance.

July 23-25, 2015:
Our group hosted the 14th MERCURY conference for computational chemistry at Bucknell.

June 18, 2013:
The new MERCURY machine, Marcy, arrived. See its wiki for details.

May 18, 2012:
Our collaborative work on water hexamers was published in Science.

Read more...





High Performance Computing

MERCURY Consortium

Aside from my research and mentoring roles, I have been managing and maintaining the MERCURY consortium's high performance computing (HPC) resources as a system administrator, along with Steve Young at Hamilton College, since 2010. In that role, I provide technical research support to MERCURY users and promote the use of HPC in chemistry and other fields.

MERCURY consortium members share high performance computing (HPC) facilities to advance their research. Haystack (2009-2013) was an SGI BX2 3700 shared-memory machine. After it was decommissioned, a Linux cluster named Marcy, hosted at Bucknell/Furman University, took over; it has been in full production since May 2013. Our newest and most powerful cluster, Skylight, has been in full production since June 2017.




NSF-XSEDE
We have had yearly allocations at various NSF TeraGrid/XSEDE facilities since 2009. XSEDE is an open scientific discovery infrastructure combining leadership-class resources at eleven partner sites to create an integrated, persistent computational resource. Using high-performance network connections, XSEDE integrates high-performance computers, data resources and tools, and high-end experimental facilities around the country. Currently, XSEDE resources include more than a petaflop of computing capability and more than 30 petabytes of online and archival data storage, with rapid access and retrieval over high-performance networks. Researchers can also access more than 100 discipline-specific databases. With this combination of resources, XSEDE is the world's largest, most comprehensive distributed cyberinfrastructure for open scientific research.

XSEDE is coordinated through the Grid Infrastructure Group (GIG) at the University of Chicago, working in partnership with the Resource Provider sites: Indiana University, the Louisiana Optical Network Initiative, the National Center for Supercomputing Applications, the National Institute for Computational Sciences, Oak Ridge National Laboratory, the Pittsburgh Supercomputing Center, Purdue University, the San Diego Supercomputer Center, the Texas Advanced Computing Center, the University of Chicago/Argonne National Laboratory, and the National Center for Atmospheric Research.




NERSC

We have had yearly allocations to use DOE's NERSC computing facilities.

NERSC is the flagship high performance scientific computing facility for research sponsored by the U.S. Department of Energy Office of Science. NERSC, a national facility located at Lawrence Berkeley National Laboratory, is a world leader in providing resources and services that accelerate scientific discovery through computation.