Hardware Specifications

At the heart of the cluster are two servers, bc247 and iserver2. They are identical Acer Altos R720s with the following specifications:

Processor: 2 x Intel® Xeon® Processor 5050 (4M Cache, 3.00 GHz, 667 MHz FSB)
Memory: 3 x 2GB ECC Registered DDR2 667 MHz DIMM
Disks: 2 x 300GB disks (SATA1), 4 x 300GB disks (SATA5)
Networking: 2 x Intel 1000BaseT (Gigabit Ethernet)
Operating System: CentOS Linux release 6.2
Kernel: 2.6.32-220.7.1.el6.x86_64

Attached to these servers are the compute nodes. The compute nodes are the computers used by students during computer lab opening times. A typical compute node is pictured below:

[Photo: a typical compute node, beonode21]

Due to rolling upgrades of the computer facilities, the compute nodes are not homogeneous. Nodes in different computer labs have differing hardware specifications; however, nodes residing in the same lab share a common specification. The specifications of the nodes in G11, G16 and G211 are as below:

Number: 25 in room G11, 25 in room G16, 8 in room G211 (58 in total)
Processor: Intel® Core™ i7-860 Processor (8M Cache, 2.80 GHz)
Memory: 4 x 2GB Unregistered DDR3 1333 MHz DIMM
Disks: NFS mounted root directory on iserver2; locally mounted SATA drive
Networking: 3Com 3c2000-T 1000BaseT (Gigabit Ethernet)
Operating System: CentOS Linux release 6.2
Kernel: 2.6.35.13beowulf-slave-node-v11 (custom compile)

The specifications of nodes in G15 are as below:

Number: 25 in room G15
Processor: Intel® Core™ i7-2600K Processor (8M Cache, up to 3.80 GHz)
Memory: 4 x 4GB Unregistered DDR3 1333 MHz DIMM
Disks: NFS mounted root directory on iserver2; locally mounted SATA drive
Networking: 3Com 3c2000-T 1000BaseT (Gigabit Ethernet)
Operating System: CentOS Linux release 6.2
Kernel: 2.6.35.13beowulf-slave-node-v11 (custom compile)
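
Both generations of node mount their root directory over NFS from iserver2 and also have a locally mounted SATA drive. A minimal Python sketch of how this can be checked from a running node is below; only the server name comes from the tables above, everything else is simply read from /proc/mounts.

    #!/usr/bin/env python
    # Sanity check, run on a compute node: confirm that the root filesystem
    # is an NFS mount served by iserver2, as listed in the tables above.

    def root_mount():
        """Return (device, fstype) for the last non-rootfs '/' entry in /proc/mounts."""
        result = (None, None)
        with open("/proc/mounts") as mounts:
            for line in mounts:
                device, mountpoint, fstype = line.split()[:3]
                if mountpoint == "/" and fstype != "rootfs":
                    result = (device, fstype)
        return result

    device, fstype = root_mount()
    if fstype is not None and fstype.startswith("nfs") and device.startswith("iserver2:"):
        print("root filesystem is NFS-mounted from iserver2 (%s)" % device)
    else:
        print("unexpected root mount: %s (%s)" % (device, fstype))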

Although the hardware differs between labs, the system architecture of all the nodes is the same: every node runs a 64-bit (x86_64) CentOS kernel. The kernel is delivered to the nodes over TFTP without an accompanying initrd filesystem. Because the different hardware setups need different drivers, all of the required drivers are compiled into a single kernel image. Including drivers that some nodes do not need makes the kernel larger than the minimum possible, which slightly increases network traffic during boot, but the increase in file size is insignificant, and the simplicity of maintaining a single kernel far outweighs any potential performance benefit.
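
To put "insignificant" into perspective, the back-of-the-envelope sketch below estimates the extra TFTP traffic a unified kernel costs across the whole cluster. The kernel sizes are illustrative assumptions, not measurements of the actual image; only the node counts come from the tables above.

    # Rough estimate of the extra boot-time TFTP traffic caused by shipping one
    # unified kernel to every node. The kernel sizes are illustrative
    # assumptions, not measurements of the actual cluster kernel.
    unified_kernel_mb = 6.0   # assumed size of the single all-drivers kernel image
    minimal_kernel_mb = 4.0   # assumed size of a stripped, per-lab kernel image
    nodes = 58 + 25           # nodes in G11/G16/G211 plus the nodes in G15

    extra_per_node_mb = unified_kernel_mb - minimal_kernel_mb
    extra_total_mb = extra_per_node_mb * nodes

    # Gigabit Ethernet moves roughly 125 MB/s before protocol overhead, so the
    # extra data amounts to only a few seconds of wire time across the cluster.
    seconds_on_gigabit = extra_total_mb / 125.0

    print("extra data per node: %.1f MB" % extra_per_node_mb)
    print("extra data cluster-wide: %.1f MB" % extra_total_mb)
    print("equivalent GigE wire time: %.1f s" % seconds_on_gigabit)

With these assumed sizes the whole cluster pulls roughly 166 MB of extra kernel data, a little over a second of raw Gigabit Ethernet time spread across 83 separate boots.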
