NASA Center for Climate Simulation

News

What's New?



10/17/2013: Return from Government shutdown

The NCCS is actively returning to operations at this time. We will provide additional notification by email as systems become available.

Barring any unforeseen issues, we expect that all systems and services will be fully restored by the close of business today (Thursday, October 17).

We appreciate everyone's patience and look forward to continuing our support of your science activities.

08/15/2013: Introduction to Intel VTune Amplifier XE 2013 Webinar

Webinar Date/Time: Wednesday August 21, 1:30 pm to 3:30 pm

Gary Carleton from Intel will present a VTune Webinar for interested NCCS users. VTune can be used for both Phi and Xeon code performance analysis.

Agenda
1. Types of Analysis
a. Hotspots Analysis
b. Concurrency/Locks and Waits Analysis
c. CPU Event Based Analysis
2. Advanced Features
3. Command Line Analysis
4. Remote Data Collection
5. Result Comparison
6. What's New in the latest release
To connect to the Webinar, please use the following:
https://www.teleconference.att.com/cscglobal
Dial-in number: 888 232 6885, Access #: 5692834

For those who would prefer not to dial in from their office, we have reserved room H118 in Building 33.

Feel free to contact the NCCS User Services Group via phone at 301-286-9120 or email at support@nccs.nasa.gov.



07/10/2013: Emergency Network Switch Maintenance Completed

The emergency downtime for the network switch maintenance has been completed. Normal system availability has been resumed. We thank you again for your patience and understanding.



07/10/2013: Today's Emergency Network Switch Maintenance 11:30 EST - noon

The NCCS has determined that the current networking event has degraded to the point where we feel the best option is to take intrusive action. This decision is being made to avoid an unintended outage, which would have the greatest potential for harm. The emergency action will be performed today starting at 11:30 EST and should last roughly 30 minutes. We will send additional updates as the emergency action progresses and completes.



07/05/2013: NCCS Degraded Network Condition

The NCCS is currently experiencing a degraded network condition that is causing higher than normal latencies. As a result, initial attempts to access various NCCS services may fail, although successive attempts will likely succeed. The systems impacted will vary, as will the timing and duration of each system's impact window. Jobs running on the Discover compute cluster have not been observed to be impacted by the current condition. The NCCS has compiled options to attempt to restore service while still keeping the network online in support of the current missions. Further communication will be provided in the event the network degrades to the point where intrusive action is required.



06/10/2013: Decommission of Discover SCU5/SCU6 in preparation for SCU9 installation (Update)

As of June 10, 2013, the Discover cluster’s Nehalem nodes are no longer available for use.

Please discontinue use of 'proc=neha' in PBS job scripts and on the qsub command line, because jobs requesting the Nehalem nodes will no longer run.
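
For example, a job script line that previously requested Nehalem nodes can simply drop the directive or request another processor type. The counts below are illustrative only; adjust them to your own job:

#PBS -l select=8:ncpus=8:proc=neha
(old form requesting Nehalem nodes; jobs like this will no longer run)

#PBS -l select=8:ncpus=12:proc=west
(example alternative requesting Westmere nodes instead)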

The NCCS is removing Discover’s Nehalem nodes (SCUs 5 and 6) to make room for Discover’s newest augmentation, SCU9, with hardware deliveries that are beginning today (June 10, 2013). SCU9 will have 480 2.6-GHz 16-core Sandy Bridge nodes (similar to SCU8). There will be a period of several weeks during which Discover’s computational resources will be somewhat reduced, prior to the full implementation and availability of SCU9.

Thank you for your patience as we continue to enhance the Discover cluster.



05/30/2013: Decommission of Discover SCU5/SCU6 in preparation for SCU9 installation.

The NCCS is preparing for the next step in the Discover cluster's evolution. We expect to start receiving the hardware for our newest scalable unit, SCU9, in mid-June. SCU9 will consist of 480 2.6 GHz 16-core Sandy Bridge nodes (similar to SCU8). To make room for SCU9, we are retiring SCU5 and SCU6 (our initial Nehalem technology nodes).

We will begin decommissioning parts of SCU5 and SCU6 immediately, with full SCU5/SCU6 shut down no later than June 14. There will be a period of several weeks where Discover's compute resources will be diminished prior to full implementation and availability of SCU9.

Users currently specifying 'proc=neha' in their batch jobs will need to update this reference, since there will no longer be Nehalem processors available once SCU5 and SCU6 are retired.

Thank you for your patience as we continue to enhance the Discover cluster.



02/04/2013: The October 2012 NCCS User Survey results are now available.

Thank you to all who took the time to provide feedback during the NCCS User Survey last October. We have analyzed your feedback and are pursuing initial actions based on that feedback. Some of those actions have already been implemented.

We intend to present a status update during the upcoming February User Forum. In the interim, please click on the following link to see a summary of the survey results.

Thank you for your support and, again, special thanks to those who provided feedback that will help us help you.



11/16/2012: Discover ranked 53 on Top500 and 31 on Green500

The most recent editions of the Top500 list (world's top supercomputers) and Green500 list (world's most energy-efficient supercomputers) were released during the week of November 12th in conjunction with the 25th annual Supercomputing Conference (SC12). Discover appeared on both of those lists as a result of recent testing performed on our newest scalable computing unit, SCU8.

Linpack performance tests run on 468 SCU8 nodes achieved 417.3 teraflops (TFLOPS), placing it 53rd on the Top500 list. At 1,935 megaflops per watt, SCU8 also ranked 31st among the world's most energy-efficient supercomputers.



11/01/2012: NCCS Status Update

All NCCS computing systems are available including JIBB as of 3:10 p.m.

We apologize for the inconvenience this may have caused.



10/31/2012: NCCS Status Update

All NCCS computing systems are available except JIBB as of 7:45 p.m.

The Jibb system remains offline while we wait for the delivery of critical parts from the vendor. We expect to have Jibb available by close of business tomorrow, November 1, 2012.

We apologize for any inconvenience that our system unavailability may cause you.



10/31/2012: NCCS Status Update

This is a status update regarding the availability of NCCS computing systems following the storm resulting in the Code Red condition at the NASA Goddard Space Flight Center. A number of NCCS computing systems are currently available including the DataPortal, the Archive on dirac and all NCCS infrastructure systems. We expect to have discover available by the end of today and hope to have the jibb system available by mid-afternoon.

We apologize for any inconvenience that our system unavailability may cause you.



10/28/2012: NCCS Downtime Status Update

GSFC Facilities Maintenance Division has completed weekend repairs to a damaged power feeder. However, in anticipation of the major storm affecting the region, we are leaving all NCCS systems down until further notice.

The most severe storm impact is expected to hit the Washington DC area Monday afternoon/evening. If GSFC is not under a Code Red status on Tuesday, October 30, we will send a status update by 10 AM EDT that day.

If GSFC is under a Code Red status on Tuesday, October 30, we will send a status update by 10 AM on the first day after the Code Red status is downgraded.



09/17/2012: Workshop - Advanced Debugging with TotalView - 09/17, 2 - 4 p.m. Bldg. 33, Rm. H118

What: Workshop - Advanced Debugging with TotalView
When: September 17, 2-4 p.m.
Where: Building 33, Room H118

The NCCS is taking advantage of a visit on Monday, September 17, by Ed Hinkel, a technical expert from Rogue Wave Software, makers of TotalView.

Ed will share first-hand knowledge of debugging technology, and will be available for discussion and attendee questions. Workshop topics include:

  • Unattended batch debugging with TVScript
  • Debugging at scale with subset debugging
  • CUDA GPU debugging



09/07/2012: SED Director's Seminar hosted by Code 610 on Fri. Sept. 7, 12 - 1 p.m.

The SED Director's Seminar hosted by the Earth Sciences Division will be held this Friday, Sept. 7, 12 - 1 p.m. The location of the seminar will be in Bldg. 33, Rm. H114.

Enabling Earth Science on the Supercomputer

  • Bill Putman and Siegfried Schubert: GEOS-5 high resolution simulations and seasonal climate predictions
  • Sujay Kumar: High-resolution Land surface modeling, data assimilation and land-atmosphere interaction studies on the NCCS cluster


08/31/2012: Discover operating system upgrade, SLES 11 SP1 available for user testing

As mentioned in earlier communications, the NCCS is preparing to upgrade the Discover cluster's operating system to SLES 11 SP1.

We have provisioned 240 Westmere nodes and 130 Nehalem nodes with this version of the OS, and are now ready for user testing.

To submit batch work to nodes running under the new OS, please use the 'sp1' queue. The 'proc=' PBS directive functions as expected in this queue, and can be utilized if specific node types are desired.
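
For example, a hypothetical submission to the new queue might look like the following (the script name and node counts are placeholders):

qsub -q sp1 -l select=4:ncpus=12:mpiprocs=12 my_job.sh
(submits my_job.sh to the SLES 11 SP1 test nodes; add proc=west or proc=neha to the select statement if a specific node type is desired)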

Our internal testing shows that recompiling is not required, but we recommend doing so nevertheless.



08/14/2012: Discover operating system upgrade, SLES 11 SP1 soon to be available for user testing

Greetings Discover Users! The following changes have been highlighted for your consideration as we move forward with the new SLES 11 SP1 image for the Discover and Dali environments.

Library / Version    SLES 11                  SLES 11 SP1
Kernel               2.6.27.54-0.2-default    2.6.32.54-0.3-default
glibc                2.9-13.11.1              2.11.1-0.34.1
dapl                 2.0.30-1                 unchanged
dapl-compat          1.2.19-1                 unchanged
gcc                  4.3.2                    4.3.4
libverbs             1.1.4-0.14               unchanged



07/31/2012: Simulated Nature Runs Its Course

A NASA Goddard climate model called GEOS-5 revisited the extraordinary 2005 Atlantic hurricane season as part of a gigantic two-year simulation run on Goddard's Discover supercomputer.



07/10/2012: NCCS Discover Update

The Discover cluster is available, and all $NOBACKUP directories are available. All but one of the disk rebuilds from the Center-wide power outage are complete.



07/9/2012: Climate Data Analysis and Visualization using UVCDAT brown bag Presentation

Date: 07-12-2012
Time: 2 p.m. - 3 p.m.
Venue: Building 33, Room H114

Dial-in number: 866-903-3877
Participant code: 6684167#

On Thu., Jul. 12, at 2 pm in Building 33, Room H114, climate scientist Jerry Potter and analysis/visualization expert Tom Maxwell (both 606.2) will demonstrate how climate scientists can use the open-source UVCDAT tools to explore and analyze climate model output, such as the NetCDF-formatted model output files produced by the GEOS-5 system and MERRA.

The UVCDAT tools feature workflow interfaces, interactive 3D data exploration, automated provenance generation, parallel task execution, and streaming data parallel pipelines, and can enable hyperwall and stereo visualization.

For scientists and engineers, generating and evaluating hypotheses is an interactive process. With each change, a different, albeit related, workflow is created. UVCDAT was designed to manage these rapidly-evolving workflows. By automatically managing the data, metadata, and the data exploration process, UVCDAT allows you to focus on the task at hand and relieves you from tedious and time-consuming tasks involved in organizing vast volumes of data. UVCDAT provides infrastructure that can be combined with and enhance existing visualization and workflow systems.

The Ultrascale Visualization Climate Data Analysis Tools (UVCDAT) is the new Earth System Grid analysis framework designed for climate data analysis; it combines VisTrails, CDAT, and ParaView.

Some helpful urls:

Ultrascale Visualization Climate Data Analysis Tools (UV-CDAT) http://uvcdat.llnl.gov/

Interactive 3D data exploration (vtDV3D) http://portal.nccs.nasa.gov/DV3D/

VisTrails http://www.vistrails.org/

ParaView - Open Source Scientific Visualization http://www.paraview.org/



07/5/2012: NCCS Discover Update

The Discover cluster is available as of 19:00.

All $NOBACKUP directories are available at this time. Some disk rebuilds are still proceeding in the background, so I/O performance will be degraded for some filesystems for the next 24 to 48 hours. Batch queues are being restarted and will be fully operational shortly.

The /discover/TMP_SCRATCH filesystem has been mounted Read-Only. Users should copy any data that they created in this area to their $NOBACKUP directories. /discover/TMP_SCRATCH will remain mounted through the weekend, but will be unmounted next week. Please copy your data off of this area as soon as possible.
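
For example, from a Discover login or dali node (the directory name below is purely illustrative):

cp -pr /discover/TMP_SCRATCH/$USER/my_run_output $NOBACKUP/
(copies a directory from the temporary scratch area back to your nobackup space)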

Thank you for your patience as we worked to recover services following the center-wide power outage.



07/3/2012: Limited Discover/Dali functionality now available

While NCCS system administrators continue recovery efforts from Saturday’s Center-wide power outage, we have established a limited functionality option for users who cannot wait for the disk rebuild process on the Discover/Dali cluster to complete.

All compute, login, dali, and gateway (datamove) nodes are now available. Access to the PBS job scheduler has also been restored. Home filesystems are accessible but $NOBACKUP filesystems are NOT available for most users at this time. For those users, a temporary scratch filesystem is available via:

/discover/TMP_SCRATCH/$USER

There will be no quotas on this filesystem, but read/write access will be available only until normal nobackup filesystems are restored. At that time, this temporary area will be remounted as read only (current estimate is Thursday afternoon). It will then be removed one week later. This area WILL NOT BE BACKED UP, so please use it at your own risk.

Thank you for your patience and understanding as we continue our recovery efforts. The NCCS User Services Group will be unreachable during the 4th of July holiday, but upon our return we will respond to email sent to support@nccs.nasa.gov and will again be reachable via phone at 301-286-9120.



07/2/2012: PEPCO Power Outage Affects All NCCS Access

All NCCS systems are currently unavailable due to a center-wide power outage that began over the weekend. Although power has been restored to the center, it will still be quite some time before NCCS systems become available for users. We have begun verifying system and infrastructure components. We expect the remaining verification activities to take much of the day today. Once we have completed the verification effort, we will begin a methodical start-up of NCCS systems.

Due to the fluid nature of the recovery effort, we will hold a special user teleconference at 13:30 to provide a status update. The dial-in number for this phone conference is 1-866-903-3877, and the participant code is 6684167#.

Although we will continue to provide additional information as it becomes available, please feel free to contact the NCCS User Services Group at 301-286-9120 or email us at support@nccs.nasa.gov



06/25/2012: NCCS Discover Upgraded PBS from version 10 to 11

On Monday, June 25 NCCS personnel upgraded the PBSPro batch job scheduler used on the Discover cluster from version 10 to version 11. The aim of this upgrade was to implement a recently received fix to a longstanding slow start-up of the batch subsystem.

We are happy to report that the patch provided has addressed the slow start-up issue, and was tested successfully (multiple times) during the June 27th maintenance window.

We apologize for the inconvenience caused by this challenging issue and appreciate your patience and understanding.



05/3/2012: New DMF Mass Storage Default: One Tape Copy

As of May 31, 2012, the NASA Center for Climate Simulation's DMF system default will change to make only one tape copy of files. The reason for this change in policy was stated in the 3/20/2012 user forum. The rate of data growth has outpaced the available budget for the NCCS DMF system. This fact required the NCCS to find new ways to reduce data holdings on tape to stretch existing resources further and still meet the demand for new data being stored in DMF.

While the new default will be a single tape copy, a user may decide a given file is important enough to warrant a second copy. In order to designate files that require two DMF tape copies, the user must use the dmtag command.

To tag a file for two DMF tape copies, issue the following command from dirac, dali, a discover login node or a job in the discover datamove queue:

% dmtag -t 2 <file_in_DMF_archive>

To see the tag of a file, issue this command:

% dmtag <file_in_DMF_archive>
2  <file_in_DMF_archive>

Specifying anything other than 2 with the -t option will tell DMF to make one tape copy. By default, a file's tag is set to 0. Using the dmtag command on a file will not recall the file from tape. The dmtag command can tag multiple files at once. See 'man dmtag' for more information.
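
For example, to tag every file in a hypothetical results directory under your archive space for two tape copies (the path is illustrative only):

% dmtag -t 2 $ARCHIVE/results/*
(tags all files in the directory; the files are not recalled from tape)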

After one tape copy becomes the default, the second tape copy of all existing DMF managed files shall be removed unless the file's tag is set to 2. Therefore users are encouraged to dmtag all files that require two copies.

While tagging your files, if you find files that are no longer needed, please remove them from DMF.

DMF Migration Process

DMF periodically scans each cache file system looking for new files to manage. New files are defined as:

  • Files that are new to the DMF system (not DMF managed)
  • DMF managed files whose content is changed

A file is considered DMF managed after it is run through the DMF migration process and is added to the DMF file database.

Once files are selected for migration to offline storage, the migration process is run which applies a predefined set of migration rules to the files. The primary purpose of the migration rules is to determine where to write the files and how many copies to create. Currently files are only migrated to tape storage. The migration rules are only applied automatically to a file when it is first managed by DMF. If a file's dmtag changes, the NCCS plans to reapply the migration rules to the file to maintain the requested number of tape copies.

More Information

Please contact NCCS User Services if you have questions or concerns or need assistance with the dmtag utility: support@nccs.nasa.gov or 301-286-9120. Please provide the following information:

  • Absolute pathnames of the files being dmtag'd
  • The system on which the dmtag command was issued
  • The dmtag command output received


04/23/2012: Mass Storage Archive Tips & Tricks brown bag Presentation

Date: April 26th 2012
Time: 1:30 p.m. - 2:30 p.m.
Venue: Building 32, Room N131

George Fekete (Code 606.2) will describe optimal ways to use the NCCS's multi-petabyte mass storage archive, which is configured for easy access to large amounts of data that appear to be on disk, while most files actually reside on tape. George will discuss simple approaches to improve archive storage and retrieval and avert problems.

The NCCS provides its users with a multi-petabyte mass storage archive based on high-density tapes. The system is set up to make it easy for users to access their archived storage, especially from the NCCS's large high-performance Linux cluster, but this transparency and flexibility come at a price. Anecdotal evidence suggests that some users have experienced frustration regarding the apparent performance of the archives. This hour of discussion aims at describing the system in enough detail to help users have an improved experience in the future. Sample scenarios leading to bad situations are introduced, and simple remedies to avert them are revealed.



04/09/2012: Intel Many Integrated Core (MIC) brown bag Presentation

Date: April 17th 2012
Time: 12:00 p.m. - 1:00 p.m.
Venue: Building 33, Room H114

Recently Intel released a prototype MIC ("Many Integrated Core") system designed specifically for highly parallel, highly vectorized code. The system, code-named Knights Ferry, has been evaluated by NCCS/SSSO. Since the system is only a prototype and not a consumer product, our focus was on functionality rather than performance. In this brown bag, we will go over the key features of the MIC and its programming models. We will also share our experience porting an existing code to the system.



03/26/2012: 2012 NASA Goddard IS&T Colloquia Lecture Series on 03/28/2012

Cloud Computing at NASA: More Science in the same Budget.
Michael Little
NASA Langley Research Center
Wednesday, March 28, 2012 
Building 3 Auditorium – 11:00 AM
(Coffee and cookies at 10:30 AM)

The NASA Nebula system was hailed as a revolutionary approach to supplying computing capabilities to the Agency. Would it work to support the science and engineering computing needs of the Mission Directorates? What characteristics would lead an application to running on the Cloud and which ones make it a poor match? After 5 months of intensive testing by over 100 NASA researchers, what did we learn? Inquiring minds want to know...



03/12/2012: NCCS Notice: NCCS Bastion Service change

On 03/26/12, direct SSH access to the NCCS Dali and Dirac nodes will be turned off, in order to comply with NASA and Code 600 security requirements.

Users will still be able to access Dali, Dirac and Discover through the NCCS Bastion Service. This mechanism allows for command line access, and has the capability to facilitate file transfer.



02/08/2012: NCCS Primer: A User's Guide

The NCCS has been updating our user documentation, and we are pleased to announce the availability of a new NCCS Primer. Key sections of the primer include Getting Started, Computing, Your Data, and Software. User feedback is greatly appreciated.



1/26/2012: The NCCS has completed the planned upgrade of Discover's scalable units 1 and 2 (SCU1, SCU2)

The NCCS has completed the planned upgrade of Discover’s scalable units 1 and 2 (SCU1, SCU2) from Woodcrest technology to the newer Westmere technology. These new nodes are accessible via the normal Discover PBS queue structure. As is the case for our previous Westmere nodes (in scalable units 3, 4, and 7), the new nodes in SCU1 and SCU2 contain 12 cores (dual-socket, hex core) and 2 GB per core of memory (24 GB per node). These new nodes bring the Discover compute node count to 3,394 and the associated core count to 35,560. Discover’s peak computing capacity is now 395.8 TFLOPS.

If there are any questions, please don't hesitate to contact the User Services Group.



1/26/2012: NCCS Notice: Expired password change

NCCS systems will properly enforce expired password changes starting Thursday afternoon (2012-01-26), per NASA policy. The systems will automatically begin the password change process upon login with an expired password. You will still be provided expiration countdown notices during login and email notices several days beforehand. As always, you can change your password before it expires, but if you forget, the systems will prompt you to do so.

The dialog you should expect during expired password change is:

Password: *old*  (Initial login with expired password)
You are required to change your LDAP password immediately.
Enter login(LDAP) password: *old-password*
Password: *new-password*
Retype new password: *new-password*
LDAP password information changed for *AUID*

As a reminder, the NASA password policy requirements are:

* contains at least 12 characters, matching at least three of the following:
    minimum of 1 digit;
    minimum of 1 uppercase letter;
    minimum of 1 lowercase letter;
    minimum of 1 special character;
* is valid for maximum of 60 days;
* may not be reused for 24 cycles;
* may not be changed again until after 24 hours has passed

If there are any questions, please don't hesitate to contact the User Services Group.



11/28/2011: 2011 NASA Goddard IS&T Colloquia Lecture Series on 11/30/2011

The New Cluster Paradigm for Exascale Computing
Thomas Sterling
Professor of Informatics and Computing
Indiana University
The New Cluster Paradigm for Exascale Computing
Building 3 Auditorium 
11:00 AM  (Coffee and cookies at 10:30 AM)
November 30, 2011

... MPI and its underlying Communicating Sequential Processes (CSP) execution model are no longer sufficient or even appropriate for parallel application programming of future clusters ... This presentation identifies the challenges facing commodity cluster architecture and programming and describes a new execution model, ParalleX, which may provide a strategy for addressing them ...



11/18/2011: The NCCS has added a new group of analysis nodes to the Discover environment.

The NCCS has added a new group of analysis nodes to the Discover environment. These nodes each contain 12 Westmere cores, 192 GB of memory, and attached Tesla GPUs. These nodes will soon be added to the general "dali" alias, but they are currently available via the alias "dali-gpu". Please feel free to begin using these additional analysis nodes. Additional details follow.

Access via ssh/sftp/scp from remote can be done via "dali-gpu.nccs.nasa.gov" directly or ssh to "dali-gpu" from existing login/dali nodes.

Access via ssh from remote can also be completed via login.nccs.nasa.gov and selecting "dali-gpu" as the host.
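
For example, with a placeholder userid of jdoe1:

ssh jdoe1@dali-gpu.nccs.nasa.gov
(direct access from a remote host)

ssh dali-gpu
(from an existing Discover login or dali node)

ssh jdoe1@login.nccs.nasa.gov
(then request "dali-gpu" when prompted for a host)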

All of the same software and filesystems available on the current dali nodes are also available on the dali-gpu nodes (including home directories, nobackup, /usr/local and other GPFS filesystems). All of the same NFS volumes available to the dali nodes (from dataportal and the archive) are available as well.

Each system is a dual socket, hex-core node (12 cores total) with 192GB of memory. Each node also has two Tesla M2070 GPGPUs attached (for use with CUDA if needed).

The general info regarding using dali nodes on the NCCS website still applies:

Info on using CUDA can also be found on the NCCS website:

The major difference when using CUDA on these nodes is that users DO NOT need to submit work to the warp queue to access the GPGPUs; they are directly attached. These new dali nodes are the same model hardware as the current warp nodes, but without the need to use PBS to run work, and they have 192 GB of memory, whereas a standard warp node has only 48 GB.

If there are any questions, please don't hesitate to contact the User Services Group.



09/22/2011: Discover Upgrade to SLES11 and PBS10 on 09/29/2011

There will be a Discover operating system upgrade starting Thursday, 9/29. There will be no SLES 10/PBS 8 nodes available after this date. No new batch work will be accepted in the SLES 10/PBS 8 environment as of 20:00 Wednesday, 9/28/11. Please see the news item below dated 8/26/11 for specific details regarding differences between the SLES 10/PBS 8 and the SLES 11/PBS 10 environments.

Cron services will also be migrated from the SLES 10 environment to the SLES 11 environment on 9/29/11. Users are reminded that they should access Discover’s cron services via the alias “discover-cron” (and NOT via discover01).
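
For example, a minimal check of your migrated cron entries, assuming you are already logged in to a Discover login node:

ssh discover-cron
crontab -l
(lists your cron entries under the new environment)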

The following resources will remain available for the duration of the upgrade:

  • Discover "west" nodes-- SCU3, SCU4, SCU7 (total ~1700 nodes/20,500 cores)
  • Special-purpose queues and all regular queues on SCU3, SCU4, SCU7 (via SLES 11 and PBS 10)
  • Dali nodes that have already been migrated to SLES 11
  • Dirac mass storage and /archive
  • Dataportal systems

Discover and Dali resources that will NOT be available starting September 29:

  • All "neha" SCU5 and SCU6 nodes (~1000 nodes/8000 cores)
  • All "wood" SCU1 and SCU2 nodes (~512 nodes/2048 cores)
  • All "demp" Base Unit nodes (~128 nodes/512 cores)
  • Dali nodes that are not yet at SLES 11

Environment changes starting September 29:

  • Only SLES 11 (and PBS version 10) will be available on discover and dali.
  • The 'discover' alias will only point to SLES 11 login hosts.
  • The 'discover-sles11' and 'discover-test' login aliases will go away.
  • The 'dali' alias will only point to SLES 11 dali nodes.

Work starting September 29 and completed in several days:

  • SCU5 and SCU6 "neha" nodes will be migrated to SLES 11. We expect this process to take two or three days, and we plan to progressively return these nodes to service as we upgrade them.
  • SLES10 dali nodes (dali01-dali05) will be migrated to SLES 11 and brought back online as we upgrade them.

Work starting September 29 and completed in several weeks or more:

  • SCU1 and SCU2 Woodcrest nodes are being upgraded to Westmere nodes (and migrated to SLES 11). The first set of upgraded nodes is expected to be available for users in November.
  • The Dempsey nodes will be unavailable for several weeks, during which time we expect to migrate those 128 nodes to SLES 11 as time permits while work progresses on the other ~1500 nodes.

If you are unsure how this upgrade will affect you, please contact the User Services Group at 301-286-9120 or at support@nccs.nasa.gov.



09/15/2011: Dali SLES 11 nodes available

The NCCS has upgraded some of the Dali analysis nodes to SLES 11 for user testing. Users may access these nodes by connecting to login.nccs.nasa.gov and requesting "dali-sles11" when prompted for a host. These analysis nodes are part of the SLES 11 / PBS 10 environment.

As noted in earlier e-mails, when submitting batch work to PBS 10 from these analysis nodes, you will need to modify your PBS select statement to include the mpiprocs=## directive in order to have your MPI processes distributed properly across the compute nodes. The mpiprocs=## directive should be included *in addition* to the ncpus=## directive.
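
A minimal example of such a select statement, with illustrative counts only:

#PBS -l select=4:ncpus=12:mpiprocs=12
(requests 4 nodes, 12 cores per node, and 12 MPI processes per node)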

The NCCS plans to disable access to the older, SLES 10 login and analysis nodes on September 29th. At that time, the aliases for "discover" and "dali" will be changed to point to the SLES 11 login and analysis nodes, and the NCCS will convert the remaining SLES 10 interactive nodes to SLES 11.

In conjunction with the conversion of the interactive nodes to SLES 11, the remaining SLES 10 Discover compute nodes will also be disabled for conversion to SLES 11. The Dempsey and Nehalem nodes will be upgraded and returned to service as soon as possible. We expect this process to take a couple of days, and we plan to progressively return the nodes to service as their upgrades are completed.

The Woodcrest nodes will not be upgraded to SLES 11, as they will be retired and their hardware upgraded to Westmere processors. We will provide a more detailed timeline for this hardware upgrade at a later date.

All users are encouraged to connect to "discover-sles11" or "dali-sles11" and begin migrating their work to that environment as soon as possible. Please contact the NCCS if you have any questions, need assistance, or encounter any problems in the Discover SLES 11 environment.



08/26/2011: SLES 11 and Westmere nodes available on Discover

For the past few months, the NCCS has been temporarily restricting a portion of Discover Scalable Compute Unit 7 (SCU7) for use by large research runs (per an earlier communication). We have removed that restriction and are inviting all Discover users to begin using the Westmere nodes of SCU7, as well as the recently upgraded nodes of SCU3 and SCU4. Please note that the 12 core Westmere nodes in SCU3, SCU4 and SCU7 are utilizing a newer operating system, kernel, and libraries than the rest of Discover (SLES 11 vs. SLES 10). Users should recompile their codes under the SLES 11 environment to ensure that all the correct libraries are being used.

Also note that the SLES 11 environment is being managed by a newer version of PBS than the rest of Discover. You will need to modify your PBS select statement to include the mpiprocs=## directive in order to have your MPI processes distributed properly across the nodes. The mpiprocs=## directive should be included *in addition* to the ncpus=## directive.

For example, instead of:

-l select=10:ncpus=12
(to request 10 nodes, with 12 cores per node)
you would now use:

-l select=10:ncpus=12:mpiprocs=12
(to request 10 nodes, 12 cores per node, and 12 MPI procs per node)

OR, to request less than 12 mpiprocs per node:

-l select=10:ncpus=12:mpiprocs=10
(to request 10 nodes, 12 cores per node, and only 10 MPI procs per node)

This select format may change in the future as we continue to refine the PBS configuration. We will notify you if it does.

At this time, the SLES 11 environment consists entirely of Westmere nodes. Because of this, the proc=west directive is not strictly required, as there are no other processor types available in this environment. Users are reminded that they may use the proc=<type> directive to request a specific type of processor. Until the remaining compute units are migrated to the SLES 11 / PBS 10 environment, specifying anything other than proc=west will result in a job that will never run.

To access the Westmere nodes in the SLES 11 environment, connect to login.nccs.nasa.gov and request "discover-sles11" when prompted for a host. This will place you on one of the SLES11 login nodes we have established for the SLES 11 / PBS 10 environment.

Note that several Intel MPI versions are deprecated in this environment:

mpi/impi-3.1.038
mpi/impi-3.2.011

Attempting to load these modules will result in an error, with the suggestion to use a newer module. (These deprecations are due to changes in the version of python in SLES11 from SLES10.)

In general, PBS scripts that work on discover will also work in the SLES 11 / PBS 10 environment without any problem or modification. The same goes for executables. The only exception is that executables that depend on glibc will need to be rebuilt. A particular example of this includes executables built with the PGI compilers. However, as mentioned above, we recommend that all codes to be run under SLES 11 should be recompiled under SLES 11 prior to execution there.



05/12/2011: Graphical Processing Units (GPUs) Available on Discover

The NCCS is pleased to announce the addition and availability of a limited number of Discover compute nodes with Graphical Processing Units (GPUs). For certain applications, GPUs can provide a significant speed up, but their use may require some application recoding or even algorithmic changes.

Users interested in taking advantage of these nodes are encouraged to contact NCCS User Services Group by e-mail at support@nccs.nasa.gov.

In your request to access these resources, please provide a brief description of the application for which you are interested in enhancing performance.

Due to the limited number of GPUs available, the NCCS and SIVO will review all requests to ensure applicability.



04/26/2011: With respect to moving files outside of the NCCS

With respect to moving files outside of the NCCS, please remember that anything stored under /archive is not local to discover/dali. These file systems are mounted via NFS (network) on the discover/dali nodes. If you need to move data in or out of the NCCS environment to a remote site and the data is stored under /archive, please log into "dirac" (dirac.nccs.nasa.gov) directly, as that is the only set of systems that has direct access to the file systems that make up /archive.

When you move (scp/sftp) data in or out of /archive on a discover/dali node (i.e., to or from a system external to the NCCS), you actually degrade your performance: to read data under /archive, the data must first be read over the network and cached locally on the dali/discover node, and then re-written back out over the network via scp/sftp. If you go directly to dirac to move your data, you bypass this double trip across the network: on dirac, the /archive filesystems are local, so the data is read from the filesystem and sent directly to scp/sftp and on to the network to your remote location.
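
For example, to push a file from the archive out to a remote site, log into dirac and run scp from there (the userid, remote hostname, and paths are placeholders):

ssh jdoe1@dirac.nccs.nasa.gov
scp /archive/path/to/results.tar jdoe1@remote.example.edu:/data/
(the /archive filesystems are local on dirac, so the file is read once and sent straight to the remote host)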



04/22/2011: NCCS Moving to a Common Standard Billing Unit May 1, 2011

NCCS is moving to a common Standard Billing Unit (SBU) for allocating and tracking computing time usage. The new SBUs will appear in usage reports starting on May 1, 2011.



04/19/2011: SCU7 availability for large research runs

The latest addition to Discover (SCU7) is now available. We are conducting a trial period through July 1, 2011 during which SCU7 will be made available only for large (i.e., over 1000 cores) research runs. Trial-period usage will not count against your computational allocation.

If you are interested in taking advantage of this opportunity, please contact the NCCS User Services Group by e-mail at support@nccs.nasa.gov or by telephone at 301-286-9120.



01/11/2011: The NCCS has acquired a new high performance computing resource dedicated to the JCSDA.

An IBM Linux Cluster powered by 572 Intel Westmere processors and featuring 200 TB of storage was delivered to the NCCS on October 25, 2010.



12/23/2010: NCCS Matlab Use Survey

Usage of the available Matlab licenses and toolboxes at the NCCS has been increasing over the last six months. We would like to take this opportunity to learn more about your current and planned Matlab usage in order to identify the best way to utilize our limited resources.

We ask that you take a moment to complete the following survey to the best of your ability so that we can plan efficiently.

The survey is located on the NCCS website at the following link:




11/09/2010: Supercomputing Conference Highlights NASA Earth, Space Missions

NASA will showcase the latest achievements in climate simulation, space exploration, aeronautics engineering, science research and supercomputing technology at the 23rd annual Supercomputing 2010 (SC10) meeting.



10/14/2010: NASTRAN User Training



09/23/2010: Dust Models Paint Alien's View of Solar System

New supercomputer simulations tracking the interactions of thousands of dust grains show what the solar system might look like to alien astronomers searching for planets.



06/29/10: User Training for the FootPrints ticketing system.

The NCCS is pleased to announce the launch of its user interface to the NCCS ticketing system, FootPrints. For your convenience, the NCCS will have two user training sessions.

When: Thursday, July 8, 2010
Start time: 3:30 p.m.
End time:   4:30 p.m
Location:   Building 33, room H114.

WebEx Meeting Number: 999 841 124
WebEx Meeting Password: NCCSusg!1 
Click here to Join WebEx Meeting.
-----------------------------------

When: Friday, July 9, 2010
Start time: 3:30 p.m.
End time:   4:30 p.m
Location:   Building 28, room E210.

WebEx Meeting Number: 996 742 200
WebEx Meeting Password: NCCSusg!1
Click here to Join WebEx Meeting.


06/07/10: NCCS FYI: Reminder, Scali MPI will not be available after July 6th, 2010 on discover.

The NCCS would like to remind you that Scali MPI will not be available after July 6th, 2010 on discover.

If you need any assistance or have questions concerning transitioning to either Intel MPI or another application, please feel free to contact us at support@nccs.nasa.gov or by telephone at 301-286-9120.

Sincerely,

NCCS User Services Group



06/02/10: NASA Center for Climate Simulation Debuts

The NASA Center for Climate Simulation (NCCS) is an integrated set of supercomputing, visualization, and data interaction technologies that will enhance agency capabilities in weather and climate prediction research.



05/05/10: NCCS Hosts Forecasts for NASA's GloPac Mission

NCCS hosted chemical forecasts for the Spring 2010 Global Hawk Pacific (GloPac) environmental science mission. Developed by NASA Goddard's Global Modeling and Assimilation Office (GMAO) and collaborators, the forecasts aided flight planning and data analysis in the field.



01/22/10: Sourcemotel transition forum

You are encouraged to attend a Sourcemotel transition forum to discuss your capability requirements in preparation for the Sourcemotel transition scheduled for Wednesday, January 27, 2010. This is a good time to bring questions to the technical staff implementing the transition. We will be discussing your requirements as well as providing technical feedback to any questions you may have.

Monday, January 25th from 11 a.m. - 12:30 p.m. Building 33, H114

If you would prefer to teleconference, please use the following information:

Dial-in number: 866-903-3877

Meeting passcode: 6684167



01/15/10: Seminar on Introduction to IDL

The NCCS is sponsoring a seminar on Introduction to IDL. Please join us on January 21, 2010 10 a.m. - 11:30 a.m. in Building 28, Room E210.



11/03/09: NCCS is increasing the core limits on the general_hi and general queues on discover, effective 11/04/09 at noon.

NCCS is increasing the core limits on the general_hi and general queues on discover.

----------
general_hi
----------
  resources_max.ncpus = 3072

-------
general
-------
  resources_max.ncpus = 1024
  resources_min.ncpus = 17
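
For example, a hypothetical job that uses the full new general_hi limit (node type and counts are illustrative; 384 nodes x 8 cores = 3,072 cores):

qsub -q general_hi -l select=384:ncpus=8 my_job.sh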



09/09/09: The NCCS's Secure Unattended Proxy (SUP) service is unavailable

The NCCS's SUP service is unavailable following the movement of the Data Management Facility (DMF) to the new hardware platform on September 13, 2009. Please contact the NCCS User Services Group if you need to use SUP functionality with NCCS systems. Please note that this change does not affect SUP services provided by the NASA Advanced Supercomputing (NAS) Division at the NASA Ames Research Center.



08/24/09: NASA Expands High-End Computing System for Climate Simulation

NASA's Goddard Space Flight Center made available to scientists in August the first unit of an expanded high-end computing system that will serve as the centerpiece of a new climate simulation capability.



08/13/09: Discover SCU5 is now available as of August 13, 2009.

SCU5 is now available for use by the Discover user community through the normal queue structure. In addition, any users wishing to explicitly request SCU5 nodes can do so via the use of proc=neha (in the select portion of the PBS qsub command).
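
For example, an illustrative qsub request that explicitly asks for the new Nehalem nodes (chunk counts are placeholders):

qsub -l select=16:ncpus=8:proc=neha my_job.sh
(requests 16 Nehalem nodes with 8 cores each via the select portion of the qsub command)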



07/28/09: Transition of NCCS Userids to Agency Userids on August 19, 2009.

The NCCS is transitioning its userids to match the NASA agency Userids (AUID). This change will only affect 251 NCCS Users. If you have not been contacted, this transition will not affect you. For those being transitioned, your password and RSA SecurID token will not be affected.

The planned date for this change is August 19th.

Prior to this change, it is highly suggested that you review scripts or programs that you may have developed and ensure that they do not use a hard-coded userid.

Using the $USER, $HOME and $ARCHIVE variables is highly recommended as a best practice. For example:

mv file /home/jdoe1

would be better written as:

mv file $HOME
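
Similarly, a hard-coded archive reference is better expressed with the environment variable (the filename is a placeholder):

cp results.tar $ARCHIVE
(copies into your archive directory without embedding a userid in the path)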

The NCCS will handle most aspects of the change (e.g. renaming home and archive directories, crontabs, etc.), but please be aware that userid changes are inherently complicated and unforeseen issues may arise.

If you have any questions or concerns, please contact the NCCS User Services Group by e-mail at support@nccs.nasa.gov or by telephone at 301-286-9120.



07/28/09: Simulations Enable Successful Hubble Navigation Experiment

Months-long simulations at NCCS produced thousands of images that were critical for planning and testing the Hubble Relative Navigation Sensor Experiment.



07/15/09: Now Available: High-End Computing at NASA 2007 - 2008

This new report features science and engineering highlights from across NASA's mission directorates and updates about the HEC Program's facilities and services.



05/19/09: NCCS Plays Important Role in Recent Shuttle Mission

Computations performed by NASA GSFC engineers on the NCCS Discover cluster played an important role in the recent shuttle mission.

The two images seen here are of the Hubble Space Telescope (HST): the image on the right was taken during Servicing Mission 4 (SM4) on Flight Day 3 (May 13, 2009), and the image on the left was rendered many months prior to launch using the Discover cluster. The NCCS computing facilities were used to generate a suite of test images of HST in support of the Relative Navigation Sensor (RNS) Experiment flown on-board the Shuttle Atlantis as part of STS-125, launched on May 11, 2009 at 2:01 pm EST. In particular, the images were used to develop the Goddard Natural Feature Image Recognition (GNFIR) algorithm, which estimates a 3D pose (position & orientation) from a 2D image for autonomous navigation, rendezvous, and docking. The imagery was used to verify the GNFIR feature set and test the acquisition and tracking pose performance. The image on the right was taken at a range of approximately 90 meters using the far-range camera of the RNS system, and was recorded and processed on-board by the GNFIR algorithm running on the Goddard advanced avionics system, SpaceCube.





05/15/09: Presentation on Application Performance Monitoring and Analysis on Discover

The NCCS User Services Group will be giving a presentation on the use of a variety of performance monitoring and analysis tools installed on the Discover cluster. This presentation will be from noon to 1:00 p.m. on Thursday, May 21, 2009 in Building 28, Room S216.

Abstract:

Several new tools have recently been made available on the Discover cluster to help users analyze their source code and monitor application resource usage on a per-core and per-node basis. This presentation will cover the use of the static Fortran source code analyzer 'ftnchek' and the annotation of source code with cache access statistics using 'cachegrind'. The discussion will include the use of tools for monitoring MPI behavior and system resource usage over the duration of application run time.
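
As a rough illustration of the kinds of invocations the presentation will cover (file and executable names are hypothetical):

ftnchek mymodel.f
(static analysis of a Fortran source file)

valgrind --tool=cachegrind ./mymodel.exe
(runs the program under cachegrind to collect cache access statistics for later annotation)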


WebEx Info 
Date: Thursday, May 21, 2009 
Presenter: Tyler A. Simon
Time: 12:00 pm, Eastern Daylight Time (GMT -04:00, New York) 
Meeting Number: 929 548 151 
Meeting Password: NCCSusg1! 

------------------------------------------------------- 
To start the online meeting 
------------------------------------------------------- 
Go to https://nasa.webex.com/nasa/j.php 
------------------------------------------------------- 
Teleconference information 
------------------------------------------------------- 
1-866-903-3877  

Participant: 6684167  
------------------------------------------------------- 



04/29/09: ACTION REQUIRED for all dirac Users

In a general e-mail message sent February 10, 2009, the NASA Center for Climate Simulation (NCCS) announced the impending decommissioning of the $NOBACKUP directories on dirac by May 1, 2009. That date has been extended to May 31, 2009. Dirac users who haven't already done so should move the research work and files in their /explore/home and /explore/nobackup directories to the Data Analysis Nodes (dali) on the Discover cluster.

Remove the files you no longer need. Move files that you do need to $NOBACKUP on discover. Only if necessary should you move them to the tape archive. Roughly 174 TB of files currently reside in the /explore/nobackup directories, so please do not automatically archive everything to tape.

If you do not have access to the Data Analysis Nodes (dali) on the discover Cluster, you will need to contact the NCCS User Services Group at 301-286-9120. If the NCCS User Services Group can be of any further assistance, please do not hesitate to contact us by e-mail at support@nccs.nasa.gov or by telephone at 301-286-9120.




04/29/09: Intel Developer Tools Training

The NCCS is hosting an Intel Developer Tools Training Class for users involved in code development on our Discover cluster. Use of Intel compilers and library tools will be explained. It will be held on May 7, from 9:00 a.m. to 4:00 p.m., Building 28, Room E210 and May 8, from 9:00 a.m. to 12:00 noon, Building 28, Room W230F. Members of the Intel Technical Staff will provide the instruction.




04/28/09: Discover Team is upgrading all internal Ethernet switches

In preparation for the upcoming SCU5 installation, the Discover Team is upgrading all internal Ethernet switches associated with the Scali nodes. This upgrade will be performed in as non-disruptive a manner as possible by employing a rolling replacement approach, limiting impact to 40 Discover nodes at a time. The switch replacement effort is expected to take place between April 29 and May 15. Please contact the NCCS User Services Group at support@nccs.nasa.gov or 301-286-9120 with any questions.




04/24/09: Completion of DataPortal Service Migration

We are pleased to announce that the thing[1-4] hosts which previously provided DataPortal Services have been decommissioned in favor of utilizing our new, much more powerful HP BladeCenter hosts. To most of our end users, this change has been made transparent by re-directing commonly utilized aliases (such as wms.gsfc.nasa.gov or map.nasa.gov) to address the new blades instead of the old thingX servers. Please use logical references to access DataPortal resources wherever possible. For example, if you previously explicitly addressed 'thing3.gsfc.nasa.gov', then you should utilize 'dataportal.nccs.nasa.gov' instead. This is a newly created round-robin alias that we will be able to preserve through future upgrades.

Below is a list of DataPortal services that were transitioned from old to new servers.

Service                  Previous Host    Current Host
SHELL ACCESS             thing[1-4]       dataportal.nccs.nasa.gov
ftp.nccs.nasa.gov        thing3           dp[3,4]
wms.gsfc.nasa.gov        thing1           dp5
map.nasa.gov             thing1           dp5
portal.nccs.nasa.gov     thing1           dp5
wmsdev.gsfc.nasa.gov     thing2           dp6
wwwdev.map.nasa.gov      thing2           dp6
opendap.nccs.nasa.gov    thing4           dp8



04/14/09: Applications and Performance Class

The NCCS hosted an Applications and Performance Class for users involved in code development on our Discover cluster. It was held on April 22, from 9:00 a.m. to 4:00 p.m., Building 28, Room S216 and April 23, from 9:00 a.m. to 12:00 noon, Building 28, Room W230F. Our instructor was Koushik K. Ghosh, Ph.D., HPC Technical Specialist, IBM Federal.




03/01/09: ACTION REQUIRED for all NCCS Users

On Wednesday, March 11, 2009 the NCCS RSA SecurID operations will be merged with the Goddard ITCD RSA SecurID operations in Code 700. This change will clear your NCCS PIN. For instructions to be followed ...




01/30/09: NCCS User Forums Scheduled for 2009

The NCCS has scheduled User Forums for 2009 on the following Tuesdays: 24 March, 23 June (Postponed), 22 September, and 8 December, 2:00-3:30 p.m. in Building 33, Room H114. Light refreshments will be served. For additional information, or to suggest an agenda item, contact the NCCS User Services Group at 301-286-9120.




01/03/09: Science Time Requests Due March 20

The Science Mission Directorate (SMD) will select from requests submitted to the e-Books online system by March 20 for 1-year allocation awards to begin on May 1. Any current projects set to expire on April 30 must have a request in e-Books to be considered for renewal. If allocation additions are needed for current projects, or immediate allocations are needed for new projects, contact Sally Stemwedel (301-286-5049) or Support for assistance.




09/30/08: Palm/Explore is no longer in service


After almost three years of dedicated service to the NCCS user community, the Palm/Explore cluster was decommissioned on September 30, 2008 and will be replaced with a new system.



07/25/08: NCCS Offering Parallel MATLAB on Discover Cluster


The NASA Center for Climate Simulation (NCCS) is offering a parallel version of MATLAB to its user community for evaluation. The software runs on the Discover computing cluster and is accessed through a special queue.

MATLAB is an interactive programming environment for developing algorithms, analyzing and visualizing data, and managing projects. Perhaps the preeminent software tool of its kind, MATLAB is currently used by more than one million scientists and engineers in 175 countries.




07/24/08: NASA Study: Cell Processor Shows Promise for Climate Modeling


In a feasibility study funded by NASA's HEC Program, the IBM Cell Broadband Engine significantly outperformed conventional processor cores.




07/21/08: Seminar Announcement: Parallel Computing in MATLAB (8/1, 9:00 a.m. to 11:00 a.m, Bld 26/Room 205)


This session will show you how to perform parallel computing in MATLAB using either your desktop machine or a computer cluster.




07/09/08: Upgrade Operating System on the Discover cluster from SLES-9 to SLES-10.


The NCCS has upgraded the operating system on the Discover cluster from SLES-9 to SLES-10 on July 10, 2008.




05/15/08: User Forum Slides available



Slides (PDF/PPT) are available for NCCS users from the Quarterly User Forum Meeting (05/15/08).




05/05/08: Seminar Series Announcement: Fortran 2003 (5/6, 12pm, 28/E210)


Software Integration & Visualization Office (SIVO) Fortran 2003 Seminar Series




02/14/08: NASA Debuts New High-End Computing Website


NASA's High-End Computing Program has officially launched its new website, which provides information about the program's mission, accomplishments, computing systems, and support services. The website serves as a gateway to the agency's two premier supercomputing facilities.




02/07/08: Upcoming User Forum


February 14th from 1:30 to 3:30 in Building 28, room S121.

The NASA Center for Climate Simulation (NCCS) and the Software Integration and Visualization Office (SIVO) are pleased to announce an open house to showcase some of the services available to NASA's research community. The open house will be held next Thursday, the 14th of February from 1:30-3:00pm in the SIVO Scientific Visualization Studio in building 28, room S121.

The NCCS will highlight some of its compute, archive and service capabilities; all of which are available to researchers who are supported by NASA's Science Mission Directorate. The SIVO team will discuss service offerings in the areas of visualization and analysis as well as their work done in support of scientific applications and the modeling community.

The open house will also include a demonstration of the Scientific Visualization Studio as well as tours of the NCCS compute and storage facility. Team members from both organizations will be on hand to assist with any questions concerning both current and future use of these services. The beginning portion of this forum will also be made available to remote users via webcast and telecon. Details can be found at http://www.nccs.nasa.gov/user_forum.html





01/14/08: Seminar Series Announcement: Fortran 2003

SIVO is pleased to begin a new biweekly series of seminars on the new Fortran 2003 (F2003) standard, starting in January 2008. The F2003 standard is a major update to the Fortran standard; it includes a wide variety of new capabilities of relevance to our community. Many of these features are now available in virtually all recent releases of Fortran compilers, with others only available from the more progressive vendors. The content of this series will emphasize the relevance of new features to scientific modeling and best practices for using these new features to ensure clean, portable software. With the exception of the first session, which will be a general introduction to the new Fortran standard, each subsequent session will cover a separate subset of the new features in an informal brown-bag format. Attendees are encouraged to participate in the form of questions, discussions, and recommendations. SIVO will also maintain a web-based discussion area for these topics.

The first session will be on Tuesday, January 29 at 2:00 PM in B28-E210.

Click here to see listing of classes.




10/25/07: Reset LDAP Password online

The NCCS User Services Group has developed a tool to provide users a convenient way to change their LDAP password online. You can find a link to the LDAP password changing utility on the NCCS website under Quick Links, or click here.




09/25/07: Allocation proposals are due September 26th

Proposals are due for all computational projects that end September 30th and for those that are new as of October 1st. Click here.




09/17/07: Slides (PPT) are available

for NCCS users from the Quarterly User Forum Meeting (09/13/2007). Click here.




09/10/07: Upcoming User Forum


September 13th from 1:30 to 3:30 in Building 33, room H114.
The User Forum is a quarterly meeting designed to facilitate dialogue with the NCCS users. Click here to see agenda.




08/06/07: High-End Computing at NASA:

An Interview with Tsengdar Lee and Phil Webster. The leaders of the High-End Computing (HEC) Program and CISTO express their views on HEC's role within NASA and the impact of IT industry trends on computational science and engineering. Read more.




05/01/07: Halem system is now being retired


After almost five years of dedicated service to the NCCS community, the Halem system is now being decommissioned. As you may know, the Halem system has been without maintenance for over a year. Starting May 1st, this system will no longer be supported. What this means to the user is that if the system should fail, there will be no attempt made to recover it. In addition, any disk or licenses which can be used on the new discover system will be removed without warning. The system will remain available in this state for a few additional weeks, at which point logins will be disabled and the system decommissioned.

Users are welcome to continue to use the Halem system under these conditions as long as it is up and running; please ensure that any data created is immediately moved to another location.




04/30/07: Slides (PDF/PPT) are available

for NCCS users from the Quarterly User Forum Meeting (04/27/2007). Click here.




04/20/07: Upcoming User Forum


April 26th from 1:30 to 3:30 in Building 33, room H114.

The User Forum is a quarterly meeting designed to facilitate dialogue with the NCCS users. Click here to see agenda.




03/26/07: March 30, 2007 Deadline for moving /nobackup data.

The NCCS is pleased to announce the availability of a new scratch space for the explore (Altix) environment. This new scratch space comes in the form of new nobackup filesystems (these filesystems are not backed up by the system staff). These new filesystems provide expanded space, better performance, and a new pathname scheme that will help differentiate them from the nobackup filesystems on the other compute environments here at the NCCS. These filesystems are available on all of the Altix systems as well as on dirac (via CXFS).

Users of the explore environment and dirac can access these scratch areas via the following path:

/explore/nobackup/

or via the environment variable:

$NOBACKUP




01/23/07: Slides (PDF/PPT) are available for NCCS users from Quarterly User Forum Meeting (1/12/2007).


Click here.




01/02/07: Slides for the Discover training class are available for NCCS users


Click here to see the slides.




12/01/06: Upcoming User Forum


January 12th from 1:30 to 3:30 in Building 33, room H114.

The User Forum is a quarterly meeting designed to facilitate dialogue with the NCCS users. Click here to see agenda.




11/07/06: NASA's High-End Computing Report for 2006 Now Available


The inaugural report from NASA's newly established High-End Computing (HEC) Program captures remarkable science and engineering accomplishments enabled by the HEC Program's shared high-end computing systems and services. (PDF-8.3MB) Please click here for more info.




11/07/06: NASA Seeks Proposals for Leadership Computing Allocations


For the second year, NASA is seeking proposals for large-scale computing allocations to support cutting-edge, computationally intensive science and engineering of national interest under the National Leadership Computing System (NLCS) initiative. Please click here for more info.




11/03/06: NASA Science and Engineering Achievements to Be Featured


Some of NASA's most spectacular science and engineering achievements enabled by the agency's high-end computing resources will be showcased at Supercomputing 2006 (SC06), the International Conference for High-Performance Computing, Networking, Storage, and Analysis at Tampa's Convention Center, Nov. 11-17, 2006. Please click here for more info.





09/15/06: The Science Mission Directorate (SMD) is now accepting allocation requests for fiscal year 2007. Submissions are due before October 4th, 2006. Please click here for more info.




09/01/06: The list of projects running on NCCS

is organized by the sponsoring mission directorate or initiative, then by project title in alphabetical order of principal investigator. Please click here for more info.




06/12/06: Weekly User Teleconference, Tuesdays at 1:30pm EST.

The purpose of the weekly User Teleconference is to discuss current issues, provide updates to the user community and to answer questions. Please click here for more info.



