- http://www.linux-watch.com/news/NS7848919863.html - "The fastest computers are Linux Computers".
These top computers are amazing; they are a testament to the "scale-out" architectural approach of aggregating repetitive computing horsepower. Take a look at the number 1 system on the list. The system is described on the Lawrence Livermore National Laboratory's web site at https://asc.llnl.gov/computing_resources/bluegenel/. According to the Top 500 web site, this "system" has 212,992 processors and 73,728 GB of memory. (I wonder what the power/thermal measurements are for systems like this - I couldn't find any Energy Star ratings on the BlueGene web site.)
- Poking around some more, it turns out that the memory is referred to in "tebibytes". I probably should've known this, but the tebi prefix is short for "tera binary" and is intended for powers of two. So a tebibyte is 1024 to the 4th (or 2 to the 40th), whereas the more familiar terabyte is 1000 to the 4th (or 10 to the 12th). There's a nice table on Wikipedia that lays this out concisely. The binary prefixes certainly are more precise - we often have clarification discussions about the difference between 1024-based and 1000-based numbers.
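To make the gap concrete, here's a quick back-of-the-envelope sketch in C (my own example, not from any of the sites above) that computes both values and the roughly 10% difference between them:

    /* Compare the binary (tebi) and decimal (tera) prefixes.
       Hypothetical illustration of the arithmetic discussed above. */
    #include <stdio.h>
    #include <inttypes.h>

    int main(void)
    {
        uint64_t tebibyte = 1ULL << 40;        /* 1024^4 = 2^40 bytes   */
        uint64_t terabyte = 1000000000000ULL;  /* 1000^4 = 10^12 bytes  */

        printf("1 TiB = %" PRIu64 " bytes\n", tebibyte);
        printf("1 TB  = %" PRIu64 " bytes\n", terabyte);
        printf("difference: %.2f%%\n",
               100.0 * (tebibyte - terabyte) / terabyte);
        return 0;
    }

This prints a difference of about 9.95%, which is exactly the gap those clarification discussions are usually about.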
These Top 500 supercomputer systems are a world in and of themselves. They represent the top of the stack for HPC workloads and have distinctive configuration challenges which in many cases dive into the "research" world, with some amazing latency, shared-filesystem, memory-access, and CPU-interconnect technologies. For Linux customers, some of these technologies are bleeding edge and will over time be productized and rolled into the commercially supported distros, while others are already shipping in today's customer-available distros.
A good example of technology which is pervasive and mature for commercial, research, and academic use is the Open MPI project. Over the last couple of months we've started looking more into Open MPI and related MPI products and have found that Open MPI is very competitive. The Open MPI organization (at http://www.open-mpi.org/ ) is very active and keeps its MPI implementation at the leading edge across a number of offerings. The feature set of Open MPI v1.2.4 is impressive and provides good flexibility across networks, interconnects, and system implementations. Our work on small clusters is based on Open MPI, which lets us focus on other system performance issues and concerns.
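For anyone curious what MPI code actually looks like, here's a minimal "hello world" sketch. It assumes a working Open MPI installation with the standard mpicc compiler wrapper and mpirun launcher; it's a generic illustration, not our actual cluster code:

    /* Minimal MPI hello-world sketch.
       Build:  mpicc -o hello hello.c
       Run:    mpirun -np 4 ./hello          */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        int rank, size;

        MPI_Init(&argc, &argv);               /* start the MPI runtime      */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* this process's id          */
        MPI_Comm_size(MPI_COMM_WORLD, &size); /* total number of processes  */

        printf("hello from rank %d of %d\n", rank, size);

        MPI_Finalize();                       /* shut the runtime down      */
        return 0;
    }

Each launched process (rank) prints its id, so "mpirun -np 4 ./hello" produces four lines of output. Real MPI applications build on the same rank/size model, adding message passing and collective operations between the ranks.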
It'll be interesting to see what we find over the coming months. It'll be fun to start learning more about these large scaled-out system configurations and see what we can apply to real-life customers today.