
Discover

The Discover system is an assembly of multiple Linux scalable units built upon commodity components. The first scalable unit was installed in the fall of 2006, and the NCCS continues to expand this highly successful computing platform.

Discover derives its name from the NASA adage "Explore. Discover. Understand."

Discover System Details

System Architecture
As mentioned above, the system comprises multiple scalable units. The lists below describe the aggregate system, its file system and operating environment, and each individual scalable unit.
Aggregate
  • 67 Racks (compute, storage, switches, and more)
  • 1.0018 Pflop/s
  • 43,240 Total Cores
File System and Storage
  • IBM General Parallel File System (GPFS)
  • 2.46 PB Storage
Operating Environment
  • Operating System: SUSE Linux Enterprise Server (SLES)
  • Job Scheduler: SLURM
  • Compilers: C, C++, Fortran (Intel and PGI)
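
For a concrete (and deliberately minimal) picture of that environment, the sketch below shows a trivial C program that could be built with either compiler family and run on a compute node. The icc and pgcc driver names and the sbatch command are the standard ones for the Intel compilers, the PGI compilers, and SLURM, but they are assumptions here; this page does not document Discover's actual module names, partitions, or job-submission policies.

  /* hello_node.c -- minimal sketch: report the host name and the number of
   * processor cores the operating system sees on the node the job lands on.
   *
   * Hypothetical build and submit commands (names assumed, not taken from
   * this page):
   *   icc  -O2 -o hello_node hello_node.c    # Intel C compiler
   *   pgcc -O2 -o hello_node hello_node.c    # PGI C compiler
   *   sbatch --wrap="./hello_node"           # submit through SLURM
   */
  #include <stdio.h>
  #include <unistd.h>

  int main(void)
  {
      char host[256] = "unknown";
      long cores = sysconf(_SC_NPROCESSORS_ONLN);  /* cores online on this node */

      gethostname(host, sizeof(host));
      printf("Running on %s with %ld cores online\n", host, cores);
      return 0;
  }

On a Westmere-era node as described below (2 hex-core processors), such a program would typically report 12 cores online (24 if hyper-threading is enabled).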

Individual Scalable Units

Base Unit (Decommissioned: September 2011)
  • Manufacturer: Linux Networx/SuperMicro
  • 3.33 Tflop/s
  • 520 Total Cores
  • 2 dual-core processors per node
  • 4 GB of memory per node
  • 3.2 GHz Intel Xeon Dempsey (2 flop/s per clock)
  • Production: 4Q 2006
Scalable Unit 1+
  • Manufacturer: IBM
  • 34.7 Tflop/s
  • 3,096 Total Cores
  • IBM iDataPlex Compute Nodes
  • 2 hex-core processors per node
  • 24 GB of memory per node
  • 2.8 GHz Intel Xeon Westmere (X5660)
  • Interconnect: InfiniBand DDR
  • Production: 2Q 2007, Upgraded 4Q 2011
Scalable Unit 2+
  • Manufacturer: IBM
  • 34.7 Tflop/s
  • 3,096 Total Cores
  • IBM iDataPlex Compute Nodes
  • 2 hex-core processors per node
  • 24 GB of memory per node
  • 2.8 GHz Intel Xeon Westmere (X5660)
  • Interconnect: InfiniBand DDR
  • Production: 3Q 2007, Upgraded 4Q 2011
Scalable Unit 3+
  • Manufacturer: IBM
  • 34.7 Tflop/s
  • 3,096 Total Cores
  • IBM iDataPlex Compute Nodes
  • 2 hex-core processors per node
  • 24 GB of memory per node
  • 2.8 GHz Intel Xeon Westmere (X5660)
  • Interconnect: InfiniBand DDR
  • Production: 3Q 2008, Upgraded 3Q 2011
Scalable Unit 4+
  • Manufacturer: IBM
  • 34.7 Tflop/s
  • 3,096 Total Cores
  • IBM iDataPlex Compute Nodes
  • 2 hex-core processors per node
  • 24 GB of memory per node
  • 2.8 GHz Intel Xeon Westmere (X5660)
  • Interconnect: InfiniBand DDR
  • Production: 4Q 2008, Upgraded 3Q 2011
Scalable Unit 5 (Decommissioned: June 2013)
  • Manufacturer: IBM
  • 46.23 Tflop/s
  • 4,128 Total Cores
  • IBM iDataPlex Compute Nodes
  • 2 quad-core processors per node
  • 24 GB of memory per node
  • 2.8 GHz Intel Xeon Nehalem (4 flop/s per clock)
  • Interconnect: InfiniBand DDR
  • Production: 3Q 2009
Scalable Unit 6 (Decommissioned: June 2013)
  • Manufacturer: IBM
  • 46.23 Tflop/s
  • 4,128 Total Cores
  • IBM iDataPlex Compute Nodes
  • 2 quad-core processors per node
  • 24 GB of memory per node
  • 2.8 GHz Intel Xeon Nehalem (4 flop/s per clock)
  • Interconnect: InfiniBand DDR
  • Production: 1Q 2010
Scalable Unit 7
  • Manufacturer: Dell
  • 161.3 Tflop/s
  • 14,400 Total Cores
  • Dell C6100 Compute Nodes (1,200 nodes)
  • 2 hex-core processors per node
  • 24 GB of memory per node (2 GB per core)
  • 2.8 GHz Intel Xeon Westmere
  • Interconnect: InfiniBand DDR
  • Production: 2Q 2011
Scalable Unit 8
  • Manufacturer: IBM
  • 606 Tflop/s
  • 7,680 Intel Xeon Sandy Bridge processor cores
  • IBM iDataPlex Compute Nodes
  • 480 Intel Many Integrated Core (Phi) co-processors
  • 2 oct-core processors per node
  • 32 GB of memory per node (2 GB per core)
  • 2.6 GHz Intel Xeon Sandy Bridge
  • Interconnect: InfiniBand QDR
  • Production: 3Q 2012
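
As a rough cross-check on the peak figures quoted above, each unit's Tflop/s rating follows from cores × clock rate × floating-point operations per clock. Two worked examples using only the numbers listed above:

  520 cores × 3.2 GHz × 2 flop/s per clock ≈ 3.33 Tflop/s (Base Unit)
  3,096 cores × 2.8 GHz × 4 flop/s per clock ≈ 34.7 Tflop/s (Scalable Units 1+ through 4+, assuming the Westmere processors sustain the same 4 flop/s per clock as the Nehalem units)

The 606 Tflop/s quoted for Scalable Unit 8 presumably also includes the contribution of its 480 Phi co-processors alongside the Sandy Bridge host processors.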

NCCS User Services Group

support@nccs.nasa.gov
301-286-9120
301-286-1634 (Fax)

Hours of Operation:
Monday through Friday
8 a.m. to 6 p.m.
Eastern Time (U.S.)
