Academic researchers in the United States will soon have a new tool. Researchers at institutions in California and Illinois, along with several high-tech companies, will put together what designers call the world's first multi-site supercomputing system. It will be built and operated with a grant announced in August from the National Science Foundation.
The Distributed Terascale Facility, as it is formally known, will tie together high-tech resources in four different locations in the United States. They are located at the University of Illinois, the Argonne National Laboratory, also in Illinois, the University of California, San Diego and the California Institute of Technology. The partnership will also work with a number of companies including IBM, Intel and Sun Microsystems.
Robert Borchers, a National Science Foundation computer expert, says there were reasons for selecting those four institutions. "They have complementary roles," he said, "and they are interconnected by a very high-speed network."
Mr. Borchers says the new supercomputer is unprecedented in the academic community. He said, "It's the biggest aggregation of computing hardware, anywhere, available to the academic - to the unclassified, academic - community. There are bigger computers but they are in military and classified applications. It will have much higher data communication speeds and much larger data storage than anybody else. So, I guess what I'm saying is that we can compute faster, move data faster and have more of it than anything around at the moment."
Computer speed is measured in teraflops. One teraflop means that a computer can perform one trillion calculations a second. The Distributed Terascale Facility, or DTF, will perform at 11.6 teraflops. That's 16 times faster than the fastest research system now available. Data will travel over an optical network at the rate of 40 billion bits per second.
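To give a sense of scale, the figures quoted above can be turned into a quick back-of-the-envelope calculation. The sketch below uses only the numbers from the article; the one-terabyte dataset is a hypothetical example chosen for illustration, not a figure from the report.

```python
# Back-of-the-envelope figures for the DTF, using the numbers quoted above.

TERAFLOP = 1e12               # one teraflop = one trillion calculations per second
dtf_flops = 11.6 * TERAFLOP   # quoted peak speed of the DTF
link_bps = 40e9               # optical network rate: 40 billion bits per second

# Calculations performed in one minute at peak speed
calcs_per_minute = dtf_flops * 60

# Time to move a hypothetical 1-terabyte dataset over the 40-gigabit link
dataset_bits = 1e12 * 8       # 1 terabyte expressed in bits
transfer_seconds = dataset_bits / link_bps

print(f"{calcs_per_minute:.2e} calculations per minute")
print(f"{transfer_seconds:.0f} seconds to move 1 terabyte")  # 200 seconds
```

At those rates, a terabyte crosses the network in a little over three minutes, which is why the designers emphasize moving data as much as computing on it.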
Again, Robert Borchers said, "What DTF does is that it allows us to deal with huge amounts of data. All of the new experimental science facilities that are coming onboard around the world are designed to generate extremely large amounts of data which has to be archived and analyzed. And so what we've done with DTF is moved from simply what's colloquially called 'big iron' - huge computers - to computer systems that can access extremely large amounts of data and move it around. Visualize it. Analyze it."
Robert Borchers says that it makes sense to use more than one location for the computer system. "The expertise to do some of these things exists in multiple locations," he said. "The people in San Diego don't necessarily want to move to the cornfields of Illinois. What we're trying to do is build a facility where we can take advantage of expertise that's available in institutions, connect them together sufficiently tightly that they can work together. It's a model for doing science that I think we are going to see more of. And partly locating it at four different institutions is a bit of an experiment to see if we can overcome the hurdles associated with having a facility spread out. It's a first. We'll see."
The Distributed Terascale Facility will be used for a number of purposes. These include supporting research in storm, climate and earthquake prediction. It will also be used to study ways to develop more efficient combustion engines and to investigate the physical, chemical and electrical properties of materials.
Robert Borchers sees it as a prototype with international implications. "We had a meeting with a group from the UK," he said, "who are getting into an initiative they call e-science, electronic science, which has many of the same goals as the DTF. They'd like very much to participate in the project. We're seeing similar trends in the Pacific rim. My sense is that we'll see worldwide experiments."
The National Science Foundation's Distributed Terascale Facility is expected to begin operations next year and to reach peak performance levels early in 2003.