The IBM Corporation is seeking to regain the distinction of building the world's fastest supercomputers. Japan's NEC Corporation currently holds that title. But IBM is under contract to the U.S. government to build two machines that together would leave the NEC system far behind in processing speed. In fact, IBM has said they would be faster than the world's top 500 existing supercomputers combined.

The feat poses formidable technological challenges.

The supercomputer is about to become even more super. IBM says the ones it is building for the U.S. Department of Energy will have the combined ability to make 467 trillion calculations per second, 10 times more than the NEC model.

"This is a major jump beyond where we are today," says Mike Nelson, IBM's Director of Internet Technology and Strategy. "For the first time we have machines that, according to some scientists, have as much raw processing power as the human brain. These machines will be able to do much more sophisticated modeling of everything from climate to nuclear weapons to DNA molecules," he said.

Personal computers for the home or workplace have one or perhaps two microprocessing chips, the brains of the units. But the smaller of the two new IBM supercomputers will contain more than 12,000 chips to make 100 trillion calculations per second - twice as fast as its nearest Japanese competitor. The U.S. Department of Energy will use it to simulate the operation of the country's nuclear weapons without having to conduct underground testing.

The bigger supercomputer will have 130,000 microprocessor chips running more than three times as fast as its partner. It will simulate physical phenomena such as the properties of materials, the behavior of high explosives, and the interaction between the atmosphere and pollution. It will store the information equivalent of one billion books.

With so many processors, some are bound to fail. Mr. Nelson says the new systems will be able to work around such failures automatically. "One of the biggest challenges is building a system that can manage itself. So we have had to develop software that can monitor the health of each individual chip and when there's a problem, it actually can call in a backup that can take over the job. So you don't have to shut down the machine and pull out the card with the faulty chip and put in a new card," Mr. Nelson said.
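The self-managing scheme Mr. Nelson describes - monitor each chip and hand a failed chip's work to a backup without shutting the machine down - can be sketched in miniature. This is an illustrative model only; the class and method names below are invented for the example and do not reflect IBM's actual software.

```python
# Minimal sketch of automatic failover: a pool of worker "chips" is
# monitored, and when one reports a fault its job moves to a spare
# without stopping the whole system. All names here are hypothetical.

class Chip:
    def __init__(self, chip_id):
        self.chip_id = chip_id
        self.healthy = True
        self.job = None

class FailoverPool:
    def __init__(self, active, spares):
        self.active = list(active)
        self.spares = list(spares)

    def health_check(self):
        """Scan active chips; swap in a spare for any that report a fault."""
        replaced = []
        for i, chip in enumerate(self.active):
            if not chip.healthy and self.spares:
                spare = self.spares.pop(0)
                spare.job = chip.job          # the backup takes over the job
                self.active[i] = spare
                replaced.append((chip.chip_id, spare.chip_id))
        return replaced

# Usage: chip 1 fails and a spare takes over its job transparently.
chips = [Chip(0), Chip(1)]
chips[1].job = "materials-simulation"
pool = FailoverPool(chips, spares=[Chip(100)])
chips[1].healthy = False
print(pool.health_check())   # [(1, 100)]
```

The point of the sketch is that the repair happens in software, while the system keeps running - no card is pulled, no machine is halted.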

Ultra-fast processing is of little use, however, if a supercomputer cannot retrieve data quickly from its memory to work with. University of Tennessee supercomputer expert Jack Dongarra says developing retrieval speeds that keep pace with fast processors is another challenge IBM faces.

"It's actually assembling and producing a network that will allow data to be communicated within the computer at high enough rates so that the processors can have the data when they are ready to do the operation. It's that movement of data that is the handicap today in terms of producing high performance machines," Mr. Dongarra said.
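Mr. Dongarra's point can be put in rough arithmetic: a processor's sustained speed is capped by how fast the network can deliver operands to it, not just by its peak rating. The numbers in this sketch are made up for illustration.

```python
# Back-of-the-envelope model of the data-movement bottleneck:
# a processor can compute no faster than its data link can feed it.

def sustained_rate(peak_flops, bytes_per_flop, bandwidth_bytes_per_s):
    """Achievable calculation rate once data movement is accounted for.

    bandwidth / bytes-per-operation caps the sustained rate; the
    processor's peak rating only matters if the link can keep up.
    """
    data_limited = bandwidth_bytes_per_s / bytes_per_flop
    return min(peak_flops, data_limited)

# A chip rated at 8 billion operations per second, needing 4 bytes of
# data per operation but fed by a 4 GB/s link, sustains only 1 billion:
print(sustained_rate(8e9, 4, 4e9))  # 1000000000.0
```

This is why, as Mr. Dongarra says, the movement of data - not raw processor speed - is the handicap in building high-performance machines.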

IBM is to deliver its new supercomputers by 2004. Whether they remain the fastest for long remains to be seen. NEC and the U.S. companies Hewlett-Packard and Cray are all competing in this realm. IBM's Mike Nelson says one advance that will speed processing is optical technology - communicating signals with glass fibers rather than wire.

"That's really the next generation of supercomputing. It means using optical switching and optical networks to link together all the components of the supercomputer. Moving to optical switching could be a factor of 10 or even 100 improvement in the throughput through the system," Mr. Nelson said.

The market for supercomputers is tiny compared to that for personal computers. So why does a company like IBM make the effort? "These companies use this as a research vehicle to help further their technology," he said.

"This is a way to build better products, which do in fact trickle down to the things that we use every day," Mr. Dongarra said.