NCCS Staff Spotlight on Mike Little:
Time Traveling in NASA Supercomputing


To better understand where all our supercomputers came from, we interviewed someone who has a long view of NASA and the proudly geeky world of supercomputing, NCCS staff member Mike Little.

Mike Little, using an imaginary time machine to surf the space-time manifold, seems to travel back to 1964 for a visit to this IBM System/360 computer center. Background photo of the IBM System/360 property of, and courtesy of, the International Business Machines Corporation. Foreground photo of Mike Little provided by NASA.

Hometown: I was born and raised in Kansas City, Missouri, but I really grew up in the Navy. I’ve spent most of my adult life in the Mid-Atlantic, living mostly in greater Washington, D.C. and Williamsburg, Virginia. I loved living in Colorado for a few years, but who can afford it any longer?

Career Path to NASA: I had wanted to be a physicist since I was 9 years old. I paid for my undergraduate degree in physics at the University of Missouri by joining the college-based, commissioned officer training program of the United States Navy (Navy ROTC). After I was qualified to supervise the operation and maintenance of naval nuclear propulsion systems, I spent four years of active duty on submarines as a Navy officer in that role. My career in the U.S. Navy gave me an appreciation for what it takes to operate highly complex systems with a diverse group of people who have varying levels of training and motivation. I also learned to be personally responsible in the context of living and working aboard a nuclear submarine in the 1970s, during a very dangerous time: the Cold War.

After two semesters of graduate school in physics, I discovered that the job market for physicists was extremely limited. I spent the next ten years working with a series of contractors for the Department of Defense (DOD). I designed and built communications and other systems for the DOD, working on the formulation of systems acquisition management programs for the U.S. Navy, Marine Corps, and Air Force. The last job I had as a DOD contractor was in Colorado Springs, conducting reliability engineering studies of the Air Force’s Consolidated Space Operations Center (CSOC), which included the satellite and ground components of a computer-controlled communications system. [Editor’s note: The new U.S. Space Force will soon build a state-of-the-art Consolidated Space Operations Facility (CSOF) at Schriever Air Force Base in Colorado Springs to expand and upgrade CSOC facilities.]

After a change in contract structure reorganized the workforce, I found a job back in Washington, D.C., with the U.S. Census Bureau. In the 1990 Census Promotion Office (CPO), I taught dozens of journalists how to use computers, introducing them to e-mail and a calendaring system. At the CPO, I developed a far better understanding of the value of a diverse workforce and its ability to overcome obstacles to create success.

In addition to enhanced promotion and outreach, the 1990 Census included several technical innovations: it was the first time that census data was integrated with the Topologically Integrated Geographic Encoding and Referencing (TIGER) System, which was co-developed by the U.S. Geological Survey and the Census Bureau to support and improve the Decennial Census. TIGER data was used to geographically code (geocode) addresses into the appropriate census geographic areas. This enabled the production of database-driven maps that linked geographic features, such as streets, and attributes, such as location data, to census results. Also for the first time, selected census data was available online through two dial-up service providers, Dialog and CompuServe.
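
As a rough illustration of what geocoding an address into a census area involves, here is a minimal Python sketch (hypothetical tract boundaries and coordinates, not actual TIGER data or Census Bureau code): a point representing an already-located address is assigned to whichever boundary polygon contains it.

# Minimal geocoding sketch: assign an address point to a census area
# by point-in-polygon testing. Hypothetical data, not actual TIGER records.

def point_in_polygon(x, y, polygon):
    """Ray-casting test: return True if (x, y) lies inside the polygon,
    given as a list of (x, y) vertices."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Does a horizontal ray from (x, y) cross this edge?
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# Hypothetical census tract boundaries (longitude, latitude vertices).
tracts = {
    "Tract 0101": [(-77.05, 38.90), (-77.00, 38.90), (-77.00, 38.95), (-77.05, 38.95)],
    "Tract 0102": [(-77.00, 38.90), (-76.95, 38.90), (-76.95, 38.95), (-77.00, 38.95)],
}

address_point = (-76.97, 38.92)  # a geocoded address as (longitude, latitude)
for name, boundary in tracts.items():
    if point_in_polygon(*address_point, boundary):
        print(f"Address falls in {name}")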

Involvement with high-end computing at NASA: After the 1990 census was conducted and the CPO was disbanded, I was selected for a position at NASA Headquarters in Washington, D.C., in an organization known at the time as “Code R.” In the Office of Aeronautics, Exploration and Space Technology, some colleagues and I connected Code R to the internet for the first time. Nobody had assigned us to do that, and nobody had said we couldn’t, so our team just did it. I managed the Code R computing network in the Institutional Supercomputing Program. I also managed the computer science part of the Critical Technologies Research Program, which funded institutes at the NASA Ames Research Center, the NASA John H. Glenn Research Center at Lewis Field, and the NASA Langley Research Center. After transferring to NASA Langley Research Center, I held a series of jobs related to computer support for the Center in the Office of the Chief Information Officer and the Science Directorate.

Recent Role at NASA: I was the lead of the Advanced Information Systems Technology (AIST) Program from 2015 to 2020. AIST funds the development of state-of-the-art technologies that enable Earth science observation capabilities through distributed sensing; optimization of science missions; and agile science investigations through advanced tools.

I managed the AIST cloud environment as an experiment to prove that commercial cloud computing could be useful to NASA’s science community. We worked through computer security and financial issues to identify ways to move scientific computing onto the cloud. Once we reached the Federal Information Security Modernization Act (FISMA) Low Impact compliance level and demonstrated that this cloud environment could actually be made to work, Computational and Information Sciences and Technology Office (CISTO) Chief Dan Duffy agreed (at a burger joint over lunch, drawing on a napkin) to take over the project and turn it into an operational capability at NCCS. In 2020, I turned the AIST Program over to Jacqueline Le Moigne, who is transforming the program to address emerging technologies.

Current Role at NASA: My current role involves helping NASA develop a strategic direction for high-end computing, which ultimately helps meet the needs of the agency’s science and engineering users. I’m largely trying to support strategic planning for NASA’s High-End Computing (HEC) Program at both locations—NASA Goddard and NASA Ames—although helping the latter has been harder without being able to travel there during the global COVID-19 pandemic.

Upon coming to work at CISTO in 2020, I worked with Tsengdar Lee, Dan Duffy, and Piyush Mehrotra to undertake the HEC Users Needs Assessment. This is a periodic, NASA-wide, focused process to help NASA understand how scientists and engineers are using HEC, what its impact is, and how user needs will evolve over the next five years. I had previously helped with the 2013 assessment and earlier ones.

Due to COVID-19, the 2020 Users Needs Assessment took place virtually over several weeks with structured panels. The information gathered during the assessment was later consolidated into a major deliverable, used primarily to understand and decide how NASA’s strategic direction in HEC can support its research and engineering programs, which investments are needed, and which services the agency needs in order to move into the future. I continue to support HEC management from within CISTO, and that involves ongoing panels and discussions to determine how best to meet the needs identified in the assessment.

How can NASA better prepare for the future of HEC? We need to recognize that our analysis tools and models must migrate onto new platform architectures, and we need to make strategic investments in the right hardware and in support-staff skills so that the migration happens efficiently.

In NASA’s supercomputing program of the early 1990s, the budget, adjusted for inflation, was triple that of recent budgets, and the number of people working in the program was about quadruple what it is now. With that budget and that number of skilled people, NASA was inventing things that helped revolutionize the 1990s computing world.

Is there anything else that you would like to mention about high-end computing? I’ve been privileged to witness the evolution of high-end computing at NASA from 1990 to the present year, 2021, and to know many of the original pioneers in high-performance computing and communications. From the beginning of NASA, we have needed the most advanced scientific and engineering computing capability possible to engineer space missions. NASA’s high-performance computing community has made regular and significant contributions to the evolution of scientific computers. From the IBM S/360 through CDC, Cray, and Silicon Graphics to the Beowulf cluster and OpenStack, NASA has consistently helped improve computing environments so they would be useful to us.

For example, the first big mainframe computer in 1964—the IBM System/360—was used for the Mercury, Gemini, and Apollo Programs. But the S/360 initially couldn’t run for more than about an hour at a time. Pioneering space architect Wernher von Braun hired a newly minted Ph.D. from the University of Alabama, John C. Lynn, to find a way to make it useful for data reduction. Lynn and his team rewrote the operating system (OS) to take out the bugs and make it simpler. NASA then gave that much-improved OS to IBM.

In another example of how NASA’s needs drive our contributions to supercomputing, NASA acquired its first Cray supercomputer but found it too difficult to migrate codes from its existing Unix systems. The NASA Advanced Supercomputing (NAS) division at NASA Ames implemented Unix on its Cray computer and gave that code to Cray. Similar contributions were made to Silicon Graphics.

The Beowulf computer cluster was actually invented in 1994 at NASA Goddard by Thomas Sterling and Donald Becker. They created the Beowulf clustering software to connect several inexpensive personal computers so they could solve complex math problems typically reserved for classic supercomputers. For this groundbreaking work, Sterling and Becker received the 1997 Gordon Bell Prize, sometimes called the Nobel Prize of supercomputing.
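
As a loose illustration of the message-passing style of programming that Beowulf clusters made affordable on commodity hardware, here is a minimal Python sketch using the mpi4py package (a much later tool, not the original Beowulf software): each process computes a partial sum of a series for pi, and one process collects the total.

# Minimal message-passing sketch in the spirit of Beowulf-style clustering.
# Uses the mpi4py package; run with something like:
#   mpirun -np 4 python this_script.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()   # this process's ID (0..size-1), one per node or core
size = comm.Get_size()   # total number of cooperating processes

# Estimate pi from the Leibniz series, with the terms split across processes.
n_terms = 10_000_000
local_sum = sum((-1) ** k / (2 * k + 1) for k in range(rank, n_terms, size))

# Rank 0 gathers and combines every process's partial result.
total = comm.reduce(local_sum, op=MPI.SUM, root=0)
if rank == 0:
    print("pi is approximately", 4 * total)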

Mike Little, seeming to travel across the space-time continuum back to 1968, stands in front of the IBM System/360 Model 91 control panel at NASA Goddard Space Flight Center (GSFC). The Model 91 was the most powerful computer of its time, capable of performing 16.6 million (16.6 × 10⁶) instructions per second. NASA images.

People who have influenced me: Almost everyone I’ve ever worked with in my career has influenced me. Perhaps the most important is my wife, Devon Rawson, who also taught me how to understand cost, schedule, and performance management in a project management sense. Before NASA, four notable mentors stand out: R. Clark Caldwell, Kathy Voth, Larry Albert, and Steve Fowler.

At NASA, the main ones are: Pat Dunnington, Cathy Mangum, Lee Holcomb, Dave Cooper, Chris Scolese, Bob Whitehead, Dave Lavery, Frank Allario, Kristin Hessenius, Bob Pearce, Pamela Richardson, Mike Freilich, Tsengdar Lee, George Komar, Dan Duffy, Pam Millar, Jacqueline Le Moigne, Marge Cole, Jack Dangermond, Mike Seablom, Sachi Babu, Dave Young, Rosemary Baize, Bruce Wielicki, Rupak Biswas, Patrick R. Murphy, Chip Trepte, Dave Alfano, Tom Soderstrom, Dan Crichton, Hook Hua, Larry James, Joe Bredekamp, Jeanne Holm, Jim McCabe, Wes Harris, Woodrow Whitlow, Bruce Barkstrom, Martha Maiden, Piyush Mehrotra, Myra Bambacus, Karen Petraska, Brandi Quam, and a whole lot more.

Challenges: Being nowhere near the smartest guy in the room makes it hard to keep up.

Inspiration: I like solving problems. Most of the problems I like are harder than a single individual can solve. They require a group of people working together. So, I like working with smart people who are undeterred by hard problems.

[Editor’s note: Mike Little is a "polymath" by several definitions: https://www.vocabulary.com/dictionary/polymath]

Sean Keefe, NASA Goddard Space Flight Center