Discover SCU Hardware Specifications

This page lists the hardware specifications for each Scalable Unit (SCU) in the Discover cluster.

Scalable Unit 10

  • Manufacturer: SGI
  • 1,229 Tflop/s
  • 30,240 Intel Xeon Haswell processor cores
  • SGI C2112 Compute Nodes
  • 2 14-core processors per node
  • 128 GB of memory per node
  • No Swap Space
  • 2.6 GHz Intel Xeon Haswell
  • Interconnect: InfiniBand FDR
  • Production: 1Q 2015

Scalable Unit 11

  • Manufacturer: SGI
  • 683 Tflop/s
  • 17,136 Intel Xeon Haswell processor cores
  • SGI C2112 Compute Nodes
  • 2 14-core processors per node
  • 128 GB of memory per node
  • No Swap Space
  • 2.6 GHz Intel Xeon Haswell
  • Interconnect: InfiniBand FDR
  • Production: 1Q 2015

Scalable Unit 12

  • Manufacturer: SGI
  • 683 Tflop/s
  • 17,136 Intel Xeon Haswell processor cores
  • SGI C2112 Compute Nodes
  • 2 14-core processors per node
  • 128 GB of memory per node
  • No Swap Space
  • 2.6 GHz Intel Xeon Haswell
  • Interconnect: InfiniBand FDR
  • Production: 2Q 2015

Scalable Unit 13

  • Manufacturer: SGI
  • 723 Tflop/s
  • 18,144 Intel Xeon Haswell processor cores
  • SGI C2112 Compute Nodes
  • 2 14-core processors per node
  • 128 GB of memory per node
  • No Swap Space
  • 2.6 GHz Intel Xeon Haswell
  • Interconnect: InfiniBand FDR
  • Production: 2Q 2016

Scalable Unit 14

  • Manufacturer: Supermicro
  • 1,560 Tflop/s
  • 20,800 Intel Xeon Skylake processor cores
  • 520 Supermicro FatTwin nodes
  • 2 20-core processors per node
  • 192 GB of memory per node
  • No Swap Space
  • 2.4 GHz Intel Xeon Skylake
  • Interconnect: Intel Omni-Path
  • Target: 2Q 2018

Scalable Unit 15

  • Manufacturer: Aspen Systems, Inc.
  • 1,920 Tflop/s
  • 25,600 Intel Xeon Skylake processor cores
  • 640 Supermicro TwinPro nodes
  • 2 20-core processors per node
  • 192 GB of memory per node
  • No Swap Space
  • 2.4 GHz Intel Xeon Skylake
  • Interconnect: Intel Omni-Path
  • Target: 3Q 2019

Base Unit
(Decommissioned: September 2011)

  • Manufacturer: Linux Networx/SuperMicro
  • 3.33 Tflop/s
  • 520 Total Cores
  • 2 dual-core processors per node
  • 4 GB of memory per node
  • 3.2 GHz Intel Xeon Dempsey (2 flop/s per clock)
  • Interconnect: Infiniband DDR
  • Production: 4Q 2006
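The peak ratings listed throughout this page follow from cores × clock × flops per clock, as hinted by the per-clock notes above. A minimal sketch, using only the figures quoted in these entries, reproduces the Base Unit's 3.33 Tflop/s and Scalable Unit 5's 46.23 Tflop/s:

```python
def peak_tflops(cores: int, clock_ghz: float, flops_per_clock: int) -> float:
    """Theoretical peak performance in Tflop/s.

    cores x clock (GHz) x flops per clock cycle, converted from
    Gflop/s to Tflop/s.
    """
    return cores * clock_ghz * flops_per_clock / 1000.0

# Base Unit: 520 cores at 3.2 GHz, 2 flop/s per clock (Dempsey)
print(round(peak_tflops(520, 3.2, 2), 2))     # 3.33

# Scalable Unit 5: 4,128 cores at 2.8 GHz, 4 flop/s per clock (Nehalem)
print(round(peak_tflops(4128, 2.8, 4), 2))    # 46.23
```

The same formula applies to the newer units, with wider vector units raising the flops-per-clock factor (the exact factor depends on the AVX clock and instruction mix, so published figures may differ slightly from this idealized calculation).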

Discover Scalable Units 1 and 2
(Decommissioned: 2015)

  • Linux Networx/Dell Custom Supersystem
  • 516 nodes (dual socket, dual core)
  • 2,064 Intel Xeon Woodcrest (2.66 GHz) processor cores
  • 1 gigabyte per core (4 gigabytes per node) = 2.064 terabytes of memory
  • 20 gigabit-per-second InfiniBand DDR network
  • 21.96 teraflops peak
  • Production: 2 & 3Q 2007
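The aggregate core and memory figures in these entries follow directly from the node counts. As a sketch for Scalable Units 1 and 2 (516 dual-socket, dual-core nodes with 4 GB each):

```python
nodes = 516
sockets_per_node = 2        # dual socket
cores_per_socket = 2        # dual core
mem_gb_per_node = 4         # 1 GB per core

total_cores = nodes * sockets_per_node * cores_per_socket
total_mem_tb = nodes * mem_gb_per_node / 1000  # decimal terabytes

print(total_cores)   # 2064
print(total_mem_tb)  # 2.064
```

The same accounting recovers the totals for the other units (e.g. Scalable Units 3 and 4: 516 × 2 × 4 = 4,128 cores and 516 × 16 GB = 8.256 TB).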

Discover Scalable Units 3 and 4
(Decommissioned: 2015)

  • IBM iDataPlex
  • 516 nodes (dual socket, quad core)
  • 4,128 Intel Xeon Harpertown (2.5 GHz) processor cores
  • 2 gigabytes per core (16 gigabytes per node) = 8.256 terabytes of memory
  • 20 gigabit-per-second InfiniBand DDR network
  • 41.28 teraflops peak
  • Production: 3 & 4Q 2008

Scalable Unit 1+
(Decommissioned: February 2015)

  • Manufacturer: IBM
  • 34.7 Tflop/s
  • 3,096 Total Cores
  • IBM iDataPlex Compute Nodes
  • 2 hex-core processors per node
  • 24 GB of memory per node
  • 2.8 GHz Intel Xeon Westmere (X5660)
  • Interconnect: InfiniBand DDR
  • Upgraded: 4Q 2011

Scalable Unit 2+
(Decommissioned: February 2015)

  • Manufacturer: IBM
  • 34.7 Tflop/s
  • 3,096 Total Cores
  • IBM iDataPlex Compute Nodes
  • 2 hex-core processors per node
  • 24 GB of memory per node
  • 2.8 GHz Intel Xeon Westmere (X5660)
  • Interconnect: InfiniBand DDR
  • Upgraded: 4Q 2011

Scalable Unit 3+
(Decommissioned: January 2015)

  • Manufacturer: IBM
  • 34.7 Tflop/s
  • 3,096 Total Cores
  • IBM iDataPlex Compute Nodes
  • 2 hex-core processors per node
  • 24 GB of memory per node
  • 2.8 GHz Intel Xeon Westmere (X5660)
  • Interconnect: InfiniBand DDR
  • Upgraded: 3Q 2011

Scalable Unit 4+
(Decommissioned: January 2015)

  • Manufacturer: IBM
  • 34.7 Tflop/s
  • 3,096 Total Cores
  • IBM iDataPlex Compute Nodes
  • 2 hex-core processors per node
  • 24 GB of memory per node
  • 2.8 GHz Intel Xeon Westmere (X5660)
  • Interconnect: InfiniBand DDR
  • Upgraded: 3Q 2011

Scalable Unit 5
(Decommissioned: June 2013)

  • Manufacturer: IBM
  • 46.23 Tflop/s
  • 4,128 Total Cores
  • IBM iDataPlex Compute Nodes
  • 2 quad-core processors per node
  • 24 GB of memory per node
  • 2.8 GHz Intel Xeon Nehalem (4 flop/s per clock)
  • Interconnect: InfiniBand DDR
  • Production: 3Q 2009

Scalable Unit 6
(Decommissioned: June 2013)

  • Manufacturer: IBM
  • 46.23 Tflop/s
  • 4,128 Total Cores
  • IBM iDataPlex Compute Nodes
  • 2 quad-core processors per node
  • 24 GB of memory per node
  • 2.8 GHz Intel Xeon Nehalem (4 flop/s per clock)
  • Interconnect: InfiniBand DDR
  • Production: 1Q 2010

Scalable Unit 7
(Decommissioned: October 2014)

  • Manufacturer: Dell
  • 161.3 Tflop/s
  • 14,400 Total Cores
  • Dell C6100 Compute Nodes (1200 nodes)
  • 2 hex-core processors per node
  • 24 GB of memory per node (2 GB per core)
  • 2.8 GHz Intel Xeon Westmere
  • Interconnect: InfiniBand DDR
  • Production: 2Q 2011

Scalable Unit 8
(Decommissioned: March 2016)

  • Manufacturer: IBM
  • 606 Tflop/s
  • 7,680 Intel Xeon Sandy Bridge processor cores
  • IBM iDataPlex Compute Nodes
  • 480 Intel Many Integrated Core (Phi) co-processors
  • 2 oct-core processors per node
  • 32 GB of memory per node (2 GB per core)
  • 2.6 GHz Intel Xeon Sandy Bridge
  • Interconnect: InfiniBand QDR
  • Production: 3Q 2012

Scalable Unit 9
(Decommissioned: April 2019)

  • Manufacturer: IBM
  • 91 Tflop/s
  • 4,480 Intel Xeon Sandy Bridge processor cores
  • IBM iDataPlex Compute Nodes
  • 2 8-core processors per node
  • 64 GB of memory per node (4 GB per core)
  • No Swap Space
  • 2.6 GHz Intel Xeon Sandy Bridge
  • Interconnect: InfiniBand FDR
  • Production: 3Q 2013