It has been quite a while since the last post. We have been busy testing and configuring the server platform and doing various groundwork for the project, but there has been good progress: we have a new developer starting soon and an operational server environment.
Last time I wrote that we got Ubuntu to boot from SAN. That was perhaps too early for celebration. Ubuntu did boot, yes, but without the multipath setup and thus without failover functionality. Also, only one of the blades was able to write to the disk array, even though each recognized it and mapped the volumes.
There was no clear information anywhere on how to set up multipath. We contacted IBM support, posted in forums, asked people working with Linux, read various manuals, and tried several distributions with better built-in support (e.g. RHEL, CentOS). Some sources said that no additional configuration was needed, while others said that device-specific settings had to be configured. We tried them all. In the end, the solution was simple. By default the disk array controllers (which were quite old) used a deprecated driver (RDAC) which refused to work correctly. There was a firmware update for the controllers, but we couldn't apply it, because other blades running in the same environment couldn't be upgraded at that time. However, there was a small switch deep in the configuration software which selected the driver to be used with a specific volume. We changed it from the default (RDAC) to Linux (MPIO). We also updated the volume mapping from the default LUN 0 to LUN 1, because the default value is known to cause boot issues.
There was no need for manual multipath configuration. Any current major Linux distribution can recognize the SAN volumes, create the correct configuration on the fly and enable the multipath modules for the kernel. We tested with CentOS 6.3 64-bit and Ubuntu Server 12.04 LTS 64-bit. The CentOS/RHEL installer is more advanced: it provides a graphical menu which displays a single multipath device with its device nodes (the mapped volumes). The installation after this step was like any other. Nice and easy. Ubuntu's installer also discovered the mapped volumes, but as two separate drives. We partitioned and installed onto the first one. The issue exists only at install time: since both drives are paths to the same volume, Ubuntu sees the changes made through the first volume in the second one as well. After the install was complete, the multipath module was loaded during boot and Ubuntu recognized the virtual multipath device instead of two attached volumes.
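To check that the installed system really composed a single multipath device out of the two paths, a couple of standard commands from the multipath-tools package help. This is a sketch for reference; it assumes multipath-tools is installed and requires root on the actual hardware:

```
# List multipath maps with their path states; a healthy setup shows
# one map with both paths in an active state.
sudo multipath -ll

# Confirm the device-mapper multipath module was loaded at boot.
lsmod | grep dm_multipath
```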
To get multipath configured and enabled automatically during the installation, we needed to have both connections online and mapped for the volume. The first advice in the manuals and blogs was to use only one connection. However, not having to configure the setup by hand is a great advantage. Always try first with a complete SAN environment, and only then fall back to a manually configured, reduced setup if there is a need.
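If manual configuration does become necessary, the multipath-tools package reads /etc/multipath.conf. A minimal sketch might look like the fragment below; the vendor/product strings here are illustrative assumptions for a DS4000-series array, not values taken from our setup, and should be checked against the actual `multipath -ll` output:

```
defaults {
    user_friendly_names yes
}
devices {
    device {
        # Illustrative match for an IBM DS4000-series controller;
        # verify the real vendor/product strings on your array.
        vendor               "IBM"
        product              "1742*"
        path_grouping_policy group_by_prio
        failback             immediate
    }
}
```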
I also promised some benchmark results. I used the hdparm utility to get an approximate overview. As a reference device, I had my high-end Dell laptop running a virtualized Ubuntu Desktop 12.10.
sudo hdparm -Tt /dev/sda
1) Diskless blades: 4 Gb Fibre Channel, IBM DS4500 disk array.
Timing cached reads: 19554 MB in 2.00 seconds = 9787.68 MB/sec
Timing buffered disk reads: 572 MB in 3.01 seconds = 190.13 MB/sec
2) Laptop: VMware Workstation 9, dedicated 7200rpm disk for virtual machines.
Timing cached reads: 16486 MB in 2.00 seconds = 8247.70 MB/sec
Timing buffered disk reads: 128 MB in 3.02 seconds = 42.36 MB/sec
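hdparm derives its MB/sec figures simply by dividing the amount read by the elapsed time, so the numbers above can be sanity-checked by hand (small rounding differences against hdparm's own output are expected, since it uses more precise internal timings):

```shell
# Buffered disk reads, SAN blade: 572 MB in 3.01 s
awk 'BEGIN { printf "SAN:    %.1f MB/sec\n", 572 / 3.01 }'
# Buffered disk reads, laptop VM: 128 MB in 3.02 s
awk 'BEGIN { printf "laptop: %.1f MB/sec\n", 128 / 3.02 }'
# The old SAN setup is still roughly 4.5x faster on buffered reads.
awk 'BEGIN { printf "ratio:  %.1fx\n", (572 / 3.01) / (128 / 3.02) }'
```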
You can imagine the speed advantage with modern SAN hardware and 32 Gb connections, if this is what we get with years-old technology. For development needs, though, the old storage hardware at hand will have to do.
I will continue later with posts about setting up our virtual environment and building a cloud platform for the archive software. We will also start the actual software development at the beginning of 2013. Stay tuned.