Terabyte Fileserver

Fast forward five years. I had built another fileserver called dildo, with 33 TiB of storage in a RAID6 mdadm software array, having been fed up with the crap performance of the Areca controller. The Athlon-based, 20-disk fileserver works well apart from the occasional CPU freeze at startup. But it filled up rapidly, and when a good friend gave me his uncooperative small business server, I decided to put it to use as my new fileserver.

The Dell PowerEdge T410 gave many problems. No expensive consultant seemed able to fix the Microsoft Windows Small Business Server 2008 install, so it was eventually replaced. I noticed the BIOS was really old, as it would not boot from USB. Funnily enough, the BIOS flash installer ran fine from a FreeDOS USB stick. After that I could install CentOS 7 on it.

The drive cage accommodated only six disks. To get a decent amount of storage I set my sights on the new Seagate Archive disks, with a whopping 8 TB of space each. Using the CD cage I could house three disks in an Icy Dock bay. Four more bays created space for twelve more disks, making a total of 15 disks and 120 TB of raw storage, a threefold increase over the current server dildo.

Getting juice to the disks posed a problem. The drive bays want Molex power plugs, but the Dell has proprietary connectors. No problem: I took the old backplane from the drive cage and cut out the sockets with some surrounding circuit board, then soldered old Molex wire harnesses to it. Problem solved.

Now the four bays had to be mounted in the T410 case. After removing the drive cage, I used a hacksaw to cut a window in the front panel. This was not easy, because it was all riveted together. Finally I got it in place and put the motherboard back in the case. The fans from the bays had to be removed to make room for the shroud that directs the airflow over the CPU. This is a disadvantage, as these drives now run hotter than their cousins in the bay in the CD cage.

In the meantime all the drives had been delivered and put in the bays. Initially I wanted to use port multipliers, but with all the PCIe slots available this made no sense. I swapped them for 4-port SATA controllers offering 6 Gb/s of throughput, a tenfold improvement over the port multipliers. This should speed up the huge array significantly.

Time to create the array. Having experience with mdadm helped quite a bit, so creating the array was easy:
mdadm -C /dev/md0 -n 15 -l 6 /dev/sd[b-p]
Once it started building the array, I saved the configuration:
mdadm --detail --scan >> /etc/mdadm.conf
Initializing the array took three days. Then I could do:
mdadm --assemble --scan
to start the array.
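Progress of that initial resync can be followed in /proc/mdstat. As a small sketch, here is a helper that pulls out the completion percentage; the mdstat line format is assumed from a typical mdadm resync, not copied from this server:

```shell
# Hypothetical helper: extract the resync percentage from an mdstat file.
mdstat_progress() {
  grep -o 'resync = *[0-9.]*%' "$1" | grep -o '[0-9.]*%'
}

# Demo against a sample snippet in the usual /proc/mdstat layout:
cat > /tmp/mdstat.sample <<'EOF'
md0 : active raid6 sdp[14] sdo[13] sdb[0]
      [>....................]  resync =  3.1% (243560448/7813894144) finish=2712.8min
EOF
mdstat_progress /tmp/mdstat.sample   # prints: 3.1%
```

In real life you would point it at /proc/mdstat itself, or simply run watch cat /proc/mdstat for a live view.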

That also posed a problem. With the introduction of RHEL 7, Red Hat changed many things. The often-used /etc/rc.local doesn't work anymore; you now have to create a systemd service. It took me lots of head-scratching to get that working. There is now a script that gets started as a service. Very annoying. But also very fast. The script also mounts the XFS filesystem that I created on the array.
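For reference, this is roughly what such a unit can look like. The unit name, mount point and paths here are placeholders of my own, not the exact script from this server:

```ini
# /etc/systemd/system/bigarray.service  (hypothetical name and paths)
[Unit]
Description=Assemble RAID6 array and mount XFS filesystem
After=local-fs.target

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/sbin/mdadm --assemble --scan
ExecStart=/usr/bin/mount -t xfs /dev/md0 /data

[Install]
WantedBy=multi-user.target
```

Enable it once with systemctl enable bigarray.service; /data is whatever mount point you created for the array.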

So I thought I was done and could start copying files onto the new server. Then I discovered that the Dell's PSU did not like 15 disks spinning up at once: it would start up, then shut down again. There is an hdparm command to set the PUIS (Power Up In Standby) flag to fix that; you have to pass --yes-i-know-what-i-am-doing to set it. Whether this caused it or it was just dumb coincidence, a disk failed and made the controller refuse to boot the server. Once I popped in my spare disk, all was fine.
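Setting PUIS on all the members might look something like this. The device names are assumed from the mdadm command above; the loop only echoes the commands, so they can be inspected before being run for real as root:

```shell
# Sketch: enable Power Up In Standby on each of the 15 array members.
# hdparm -s1 sets the PUIS flag (-s0 clears it again); hdparm considers
# it dangerous, hence the mandatory confirmation flag.
for dev in /dev/sd{b..p}; do
  echo hdparm -s1 --yes-i-know-what-i-am-doing "$dev"
done
```

Drop the echo to actually apply it.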

But I still needed that staggered spinup, so I decided on a hardware solution and delayed the powering up of the top bay. The circuit introduces a seven-second delay, giving the PSU time to accommodate the inrush current. The SATA controller doesn't mind, as it gets configured only much later during startup. Peak power is 350 W; running, it's around 200 W.

© 2015 Zappy TV