It’s been no secret for a very long time that supercomputers suck down a great deal of energy while doing their lightning-fast computations. In fact, some of the most energy-consuming computations of all are the programs used to calculate and model global climate change.
But a recent breakthrough in energy-saving supercomputer technology means that this intense number crunching will become feasible very soon. A company named Tensilica has teamed up with the Department of Energy’s (DOE) Lawrence Berkeley National Laboratory to design a new supercomputing architecture using small processor cores capable of 100 to 1,000 times the throughput of traditional high-end computers.
The drawback has traditionally been that supercomputers use an extreme amount of energy, give off a high degree of heat, and require such complex physical installations that their cost has been prohibitive.
According to Associate Laboratory Director Horst Simon, speaking about the breakthrough, “Such processors, by their nature, must deliver maximum performance while consuming minimal power – exactly the challenge facing the high performance computing community. One of the most compute-intensive applications is modeling global climate change, a critical research application and the perfect pilot application for energy-efficient computing optimization.”
By making supercomputers energy efficient, and thus less costly, we will get answers to many pressing environmental questions much more quickly. And the faster we get a handle on the issues surrounding global warming, the more intelligently we can decide, as a population, how to combat the problem in the most effective manner.