The TinyHPC Cluster and Operating System

The latest pictures of the cluster have now been posted - click here to view them

You are on the TinyHPC project page. Here you will find details on the Beowulf cluster that I am building. The most helpful item should be the PDF, which is the paper I wrote for the project. It not only describes the cluster itself but also presents enough detail for you to replicate my efforts yourself. Some details of the paper, especially those about the server, are a bit out of date.

I noticed that a lot of people from the PelicanHPC forum have been reading the paper; for them, I will have the final draft ready in a few more days. Until then, read Section 3 only, which is the part most relevant to setting up the cluster through an Ubuntu installation. Also, Michael Creel from the forum suggested that I script the setup of the cluster on Ubuntu Server and then package the scripts into a .deb. I will do that as soon as I resolve an issue with the /etc/hosts file in my particular setup - I think that each node must have its own root partition, unlike the Debian Live situation in PelicanHPC, where the nodes get their filesystem as a squashfs from the /live/image directory of the server.
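To give an idea of what that setup script would have to produce, here is a minimal sketch of generating the /etc/hosts entries so every node can resolve the others by name. The subnet, starting address, and "nodeN" naming scheme are placeholder assumptions for illustration, not the actual TinyHPC layout:

```python
# Sketch: build one "IP hostname" line per compute node.
# The 192.168.0.x subnet and node1..nodeN names are assumptions.

def hosts_entries(node_count, subnet="192.168.0.", first_ip=101):
    """Return a list of /etc/hosts lines, one per compute node."""
    return [f"{subnet}{first_ip + i} node{i + 1}" for i in range(node_count)]

if __name__ == "__main__":
    # In a real setup script these lines would be appended to /etc/hosts
    # on every node (or baked into the image).
    print("\n".join(hosts_entries(4)))
```

The same list could then be written out by the packaged .deb's post-install script, which is what makes the per-node root partition question matter: each node needs its own writable copy of the file.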

Click here to read the current draft of the paper


Goals for now:

1. Increase the number of nodes - obviously. (ADDED 10 MORE NODES)  

2. Get Gigabit Ethernet or InfiniBand - again, once it gets within an affordable range, I surely want to squeeze out that last GFLOP by improving the cluster networking.

3. (DONE) Clean things up physically - it's a mess with motherboards and parts jumbled up - maybe assemble a frame to hold the cluster parts, get a KVM switch, all that.

4. (DONE) Get NAST3DGP up and running, probably with some simple simulations before September 2. 

5. Get Tecplot licensed and installed on the cluster.


Here is a picture of the (close to) latest cluster - I have removed the older pictures from this page, but they can still be found here along with others:

Here is the abstract: TinyHPC is a Class I Beowulf high performance computing cluster of the distributed memory architecture. It uses MPI for inter-node communication, and its primary purpose is running fluid dynamics programs and my other projects that might need its power. It has a theoretical peak of 15.6 GFLOPS.
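For what it's worth, that peak figure follows from the usual formula: peak FLOPS = processors × clock rate × floating-point operations per cycle. A quick sketch - the node count of twelve and the one-FLOP-per-cycle rate are my illustrative assumptions that happen to reproduce the 15.6 GFLOPS figure, not measured values from the cluster:

```python
# Theoretical peak = nodes * clock (GHz) * FP ops per cycle.
# Twelve nodes and 1 FLOP/cycle are assumptions for illustration;
# the 1.3 GHz clock matches the Celeron processors described below.

nodes = 12
clock_ghz = 1.3
flops_per_cycle = 1

peak_gflops = round(nodes * clock_ghz * flops_per_cycle, 1)
print(peak_gflops)  # 15.6
```

Measured HPL performance will of course come in well below this, since the formula ignores memory bandwidth and network overhead.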

Progress (taken from the Labs page, as you may have already guessed): The first day I got a cluster running (ParallelKnoppix) was June 13, 2008. That version was composed of a Gateway e3400 desktop and two Toshiba Satellite laptops connected by a D-Link DI-524. The next version, connected by an Intel Express 330T hub and having four nodes running PelicanHPC (one Satellite laptop, and three desktops from Gateway, Tangent, and eMachines), was created on July 8, 2008. A similar version with six nodes (adding the other Toshiba laptop and another desktop) was finished on July 10, 2008. Up to this point, these clusters did not do well on HPL due to their heterogeneous nature, so I decided to build a homogeneous cluster. I obtained five Intel 815EGEW motherboards and five Intel Celeron FCPGA2 1.3 GHz processors, as well as 512 MB RAM sticks, power supplies, and network cards. The next version was ready by July 29, 2008. That version is now the TinyHPC project, precisely what you are seeing right now.