
News Release

Stampede Era Begins

Contact: Paul Redfern
Cell: (607) 227-1865

FOR RELEASE: April 10, 2013

ITHACA, N.Y. - Stampede, one of the largest computing systems in the world for open science research, is now operational and available to researchers. Deployed at the Texas Advanced Computing Center (TACC), Stampede is part of the National Science Foundation's Extreme Science and Engineering Discovery Environment (XSEDE) program.

Susan Mehringer, Assistant Director of the Cornell University Center for Advanced Computing (CAC), and her consulting team were selected by TACC to provide Stampede training to the national research community. David Lifka, CAC Director, serves as the Architecture and Design Coordinator for XSEDE.

Cornell researchers who would like to be informed about Stampede training opportunities, or who would like assistance with the Stampede allocation process, may contact CAC.


The new Stampede Dell PowerEdge C8220 cluster is configured with 6,400 Dell DCS Zeus compute nodes, each with two 2.7 GHz Intel Xeon E5-2680 (Sandy Bridge) processors. With 32 GB of memory and 50 GB of storage per node, users have access to an aggregate of 205 TB of memory and 275+ TB of local storage. The cluster is also equipped with Intel Xeon Phi coprocessors based on the Intel Many Integrated Core (Intel MIC) architecture. Stampede will deliver a peak performance of 2+ PF on the main cluster and an additional 7+ PF on the Intel Xeon Phi coprocessors.
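As a back-of-envelope check, the aggregate memory and main-cluster peak figures follow directly from the per-node specs. The sketch below assumes 8 cores per E5-2680 socket and 8 double-precision FLOPs per core per cycle via AVX; neither factor is stated in the release.

```python
# Back-of-envelope check of Stampede's aggregate figures.
# Assumptions (not stated in the release): 8 cores per E5-2680 socket,
# 8 double-precision FLOPs per core per cycle (4-wide AVX, mul + add).
nodes = 6400
sockets_per_node = 2
cores_per_socket = 8          # assumed for the E5-2680
clock_hz = 2.7e9
flops_per_cycle = 8           # assumed
mem_per_node_gb = 32

aggregate_mem_tb = nodes * mem_per_node_gb / 1000
peak_pflops = (nodes * sockets_per_node * cores_per_socket
               * clock_hz * flops_per_cycle) / 1e15

print(f"aggregate memory: {aggregate_mem_tb:.0f} TB")  # ~205 TB
print(f"main-cluster peak: {peak_pflops:.2f} PF")      # ~2.2 PF, i.e. "2+ PF"
```

Both results agree with the figures quoted above.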

Stampede also provides access to 16 large-memory nodes with 1 TB of RAM each, and 128 nodes each containing an NVIDIA Kepler 2 GPU, giving users access to large shared-memory computing and remote visualization capabilities, respectively. Compute nodes have access to a 14 PB Lustre parallel file system. An FDR InfiniBand switch fabric interconnects the nodes in a fat-tree topology with a point-to-point bandwidth of 40 GB/sec (unidirectional).

Stampede is intended primarily for parallel applications scalable to tens of thousands of cores. Normal batch queues will enable users to run simulations for up to 24 hours. Jobs requiring longer run times or more cores than the normal queues allow will run in a special queue after approval by TACC staff. Serial and development queues will also be configured. In addition, users will be able to run jobs using thousands of the Intel Xeon Phi coprocessors via the same queues to support massively parallel workflows.
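Jobs on Stampede are submitted through the Slurm batch system. A minimal job script for the normal queue might look like the sketch below; the job name, node counts, and application name are illustrative, and the queue name should be confirmed against TACC's user documentation.

```shell
#!/bin/bash
#SBATCH -J mysim              # job name (illustrative)
#SBATCH -p normal             # queue; confirm the name in the TACC user guide
#SBATCH -N 64                 # number of nodes
#SBATCH -n 1024               # total MPI tasks (16 cores per node)
#SBATCH -t 24:00:00           # wall time; 24 hours is the normal-queue limit
#SBATCH -o mysim.%j.out       # stdout file (%j expands to the job ID)

ibrun ./my_mpi_app            # TACC's MPI launcher wrapper
```

Longer or larger jobs would instead request the special queue after TACC approval, as described above.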