Read it in full at http://www.pcauthority.com.au/Feature/3 ... puter.aspx
The term “supercomputer” is a loose one. There’s no official definition, so there’s nothing preventing you from applying the term to your desktop PC, laptop or digital watch. Broadly, though, it refers to a computer that’s much more powerful than the typical hardware of its period.
The first supercomputer is often said to be the CDC 6600, designed in the early 1960s by Seymour Cray (whose name would become synonymous with supercomputing). It could perform calculations at a rate of around one megaflops – that is, one million floating-point arithmetical operations per second; roughly five times the performance of a contemporary mainframe such as the IBM 7090.
Today, the term might refer to a system such as the Fujitsu K computer, capable of more than ten petaflops – a staggering ten-billionfold increase over Cray's original design. The two systems aren't perfectly comparable, since they performed quite different tasks, but it's clear we're dealing with vast amounts of power.
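That "ten-billionfold" figure follows directly from the two peak ratings quoted above; a quick sketch of the arithmetic (using the approximate, illustrative figures from the text):

```python
# Approximate peak performance figures quoted in the text.
cdc_6600_flops = 1e6        # ~1 megaflops (CDC 6600, early 1960s)
k_computer_flops = 10e15    # ~10 petaflops (Fujitsu K computer)

# Ratio of the two peak ratings.
speedup = k_computer_flops / cdc_6600_flops
print(f"{speedup:.0e}")  # prints "1e+10" – a ten-billionfold increase
```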
Supercomputing applications
It might not be immediately obvious what anybody might need with such incredible computational power, but there are a number of real-world tasks that will devour all the processing resources you can throw at them. In scientific research, supercomputers can be used to test fluid-dynamics and aerodynamic models without the need to build expensive prototypes. At CERN, supercomputers perform simulated subatomic experiments. Seismologists use supercomputer resources to model the effects of earthquakes, and meteorologists can rapidly analyse large quantities of sensor data to predict how weather systems will develop.
Supercomputing is at the forefront of new technologies, too. Creating a computer interface that responds to natural language, for example, is an extremely challenging task, owing to the immense variety of sounds, situations and nuances that must be understood; the more horsepower that can be thrown at the problem, the better the results will be.