09
Apr
2018

The 2016 NAS: two years in!

Time really does fly… It’s been two years now since I set up the 2016 NAS.

The NAS is in its new home in the equipment rack (TV stand edition) – and fortunately for me and perhaps unfortunately for you I don’t have much to report.

The hard drives have been absolutely fantastic. They haven’t even batted an eyelid – still serving files just as well as the day I got them. The underlying OS, Rockstor, has been okay. I’ve had to redo some scheduled-task configs, but other than that it’s been grand!

I’ve been scrubbing the volume once a month and so far there hasn’t been any corruption – maybe the ECC RAM has helped there. There are no signs of any of the drives dying just yet either.
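
For anyone curious, the monthly scrub boils down to a single btrfs command. Rockstor schedules it through its web UI, but a rough cron equivalent would look something like this (the pool mount point is illustrative):

# crontab entry: scrub at 3am on the 1st of each month, logging per-device stats
0 3 1 * * /usr/sbin/btrfs scrub start -Bd /mnt2/mainpool >> /var/log/btrfs-scrub.log 2>&1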

The volume itself is getting a bit full now, so I am tempted to expand the storage to 12TB by throwing in another 4TB drive and balancing the array. 4TB drive prices have come down slightly, but 8TB drives are now the better value per terabyte.
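
If I do go the extra-drive route, the underlying btrfs steps would look roughly like this – Rockstor wraps this sort of thing in its UI, so treat this as a sketch (device name and mount point are made up):

btrfs device add /dev/sde /mnt2/mainpool       # add the new 4TB drive to the pool
btrfs balance start /mnt2/mainpool             # redistribute data and parity across all four drives
btrfs filesystem usage /mnt2/mainpool          # confirm the new capacity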

It’s tempting to replace them all with 8TB drives to double the space, but there isn’t anything wrong with the 4TBs. I’ve decided I don’t need so much on the NAS, so I have been removing/compressing old data and recovering space that way.

The snapshot functionality of btrfs has been a lifesaver too – I was running a Minecraft server and the save file got corrupted. Usually you’re stuffed, but I was able to roll back to a snapshot from a few hours before and everything was recovered!
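
Rockstor drives this from its UI, but on plain btrfs a rollback like that is essentially snapshot juggling – roughly this, with names and paths purely illustrative:

# Take a read-only snapshot of the share every few hours...
btrfs subvolume snapshot -r /mnt2/mainpool/minecraft /mnt2/mainpool/.snapshots/minecraft-1800
# ...and if the live copy gets corrupted, swap it out for the snapshot
btrfs subvolume delete /mnt2/mainpool/minecraft
btrfs subvolume snapshot /mnt2/mainpool/.snapshots/minecraft-1800 /mnt2/mainpool/minecraft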

Just a short one – I’m looking out for 4TB Seagate NAS drives on sale… Maybe there will be an upgrade post coming!

05
Jul
2016

Rockstor: an introduction

Right, time to get into the OS of the NAS: Rockstor! If you haven’t seen my series on the 2016 NAS, be sure to check that out first!

When I was deciding which OS to run, I had a few objectives to achieve:

  • Linux based for familiarity
  • A checksumming filesystem (like ZFS or BTRFS)
  • Stability
  • Simple UI to work with
  • Good performance
  • Good hardware compatibility
  • Drive health monitoring and reporting

[Screenshot: Rockstor dashboard overview, 2016-07-05 19:44:51]

I didn’t want to use FreeNAS with ZFS because of the huge memory requirements (I only have 4GB of RAM in this box, expandable up to 8) – the usual rule of thumb for ZFS is a 4GB baseline plus roughly 1GB of RAM per 1TB of hard drive storage, which for this build would already exceed what I can fit. BTRFS gives me the same checksumming capability without demanding anywhere near as much RAM.

As for the Linux side, Rockstor runs on CentOS 7, which is similar to Ubuntu in many ways (except package handling, which is not hard to pick up) – this makes it simple for tinkerers and tweakers to play around in the underlying OS! It also goes a long way towards stability, since this is CentOS – the distro that a huge chunk of the world’s web servers run on. Enough said.

So, let’s talk about the UI! The screenshot above is the main page, which gives you a general overview of the current state of the system. You get all the basics – CPU, memory, network, disk utilisation and disk space usage. These are rolling graphs showing the realtime activity of each component – very useful to have if you are experiencing slowdowns!

Disk management:

[Screenshot: Rockstor disk management page, 2016-07-05 20:06:13]

Disk management is very simple. You can see your drives, their capacities, and plenty of other information such as SMART data. You can also configure the spin-down policies of the drives – I have mine set to an hour to save some power (at night time for instance).
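
Rockstor lets you set the spin-down in the UI, but it maps onto the standard ATA idle timer underneath – setting the same one-hour figure by hand with hdparm would look something like this (drive path illustrative):

hdparm -S 242 /dev/sda    # values 241-251 mean (n-240) x 30 minutes, so 242 = 1 hour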

Disk pools:

[Screenshot: Rockstor pools overview, 2016-07-05 20:09:11]

Here, you manage your drive pools. A pool is a storage space consisting of one or more drives. You can see the default Rockstor installation on the SSD, as well as the 8TB RAID5 array:

[Screenshot: pool detail view, 2016-07-05 20:11:21]

You can see the drives that belong to a pool, and you have access to other tools such as scrubbing and balancing (more on this in later posts).

Shares:

[Screenshot: Rockstor shares page, 2016-07-05 20:37:55]

This is where you manage your shares – essentially subvolumes within the drive pools that you can separate out, assign to different users and give different snapshot policies (more on this in later articles!).
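
Since a share is basically a subvolume, the command-line equivalent is short – a rough sketch, with the pool path purely illustrative:

btrfs subvolume create /mnt2/mainpool/media    # create a new share (subvolume) on the pool
btrfs subvolume list /mnt2/mainpool            # list the shares (and snapshots) on the pool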

There is a lot to cover in this OS, so I will have to spread the details out over a series of posts. I’ll see you in the next article!

04
Jun
2016

2016 NAS: Power Consumption

So, the NAS is complete. I purposely bought low-power hardware – the 35W i3 chip, 5900 RPM NAS drives, 80+ Bronze PSU, etc. It’s time to have a look at what we accomplished.

I got an el-cheapo power meter from Maplins. It claims a 2.5% accuracy rating, which is fine for basic testing. I’m not about to fork out hundreds to see the difference of a watt or two!

So, I booted the NAS, and left it idle for about an hour and looked at the reading:

37W at idle – not bad at all! At this usage, it looks like it will use ~320 kWh per year – so you can calculate the running cost based on your electricity price. For the average UK electricity price, that’s about £34 a year – or £2.84 a month (assuming a fixed tariff).
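
Here’s the back-of-the-envelope maths if you want to plug in your own tariff (the 10.5p/kWh figure is just an assumption for a typical UK rate at the time):

awk 'BEGIN {
  watts = 37; pence_per_kwh = 10.5            # the tariff is an assumption - use your own
  kwh   = watts * 24 * 365 / 1000             # roughly 324 kWh per year
  cost  = kwh * pence_per_kwh / 100           # roughly 34 pounds per year
  printf "%.0f kWh/year, £%.2f/year, £%.2f/month\n", kwh, cost, cost / 12
}'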

I used the Rockstor GUI to spin the drives down to see what the base system alone used:

That put the hard disks at 11W – just under 4W per disk for the Seagate 4TB NAS drives. Not bad at all!

I then loaded up a Plex transcode (a sync to my phone that would take a while) and this is what I saw:

That is 65W. This is the highest value I observed, and is very good for a NAS that can churn out multiple 1080p video streams to several devices at once.

So that’s the hardware side of things out of the way! I should really get a UPS for this system since I plan on storing a lot of data on it, and I will probably grab a fourth drive to get the best value out of the storage space I have. Up next I will be talking about the OS I picked for this NAS: Rockstor!

03
Jun
2016

2016 NAS: Seagate 4TB NAS Drives

So, now onto the most important part of this build. The hard disks. I had a conundrum: do I get the WD 4TB Red drives or do I go for the Seagate NAS drives?

I scoured Reviews-land, searching far and wide for a justifiable reason to go for the WD Reds over the Seagate NAS drives. The price difference was about £30 per drive, so with three drives that is a fair bit of margin. My past experience also pointed me towards Seagate for RAID – the one WD drive I’ve owned has corrupted photos on me in the past. And since I will be using BTRFS, I won’t have to worry so much about bit-rot anyway.

Aaaaaaaaaaand the Seagate drives went on sale. So – here we are:

My, are these drives absolute beasts. They’re the heaviest hard disks I’ve ever had. And I’ve had a lot of hard disks.

Time for some performance numbers, eh? Let’s do some classic hdparm -tT /dev/sdx commands and see what we get:

hdparm -tT /dev/sd[abcd]

/dev/sda:
Timing cached reads: 14610 MB in 2.00 seconds = 7311.68 MB/sec
Timing buffered disk reads: 472 MB in 3.00 seconds = 157.30 MB/sec

/dev/sdb:
Timing cached reads: 14408 MB in 2.00 seconds = 7210.30 MB/sec
Timing buffered disk reads: 488 MB in 3.00 seconds = 162.54 MB/sec

/dev/sdc:
Timing cached reads: 19482 MB in 2.00 seconds = 9753.21 MB/sec
Timing buffered disk reads: 498 MB in 3.01 seconds = 165.59 MB/sec

/dev/sdd:
Timing cached reads: 15366 MB in 2.00 seconds = 7690.23 MB/sec
Timing buffered disk reads: 654 MB in 3.00 seconds = 217.95 MB/sec

So, sda, sdb and sdc are the three hard disks; sdd is the boot SSD (plugged into a SATA-2 port). We are hitting roughly 160 MB/s on the hard disks – these are also the fastest hard drives I’ve ever had in terms of sequential read speed.

These Seagate drives run at 5900 RPM, which is slower than the standard 7200 RPM you see on most consumer drives, and way less than the 10k or 15k RPM you see on server drives. The lower spindle speed mainly impacts access time, since the drive has to wait longer for the data to reach the read head. To test for access time, I like to use seeker.

Here are the results:

./seeker /dev/sda

Results: 79 seeks/second, 12.64 ms random access time

./seeker /dev/sdb

Results: 76 seeks/second, 13.01 ms random access time

./seeker /dev/sdc

Results: 78 seeks/second, 12.72 ms random access time

./seeker /dev/sdd

Results: 5520 seeks/second, 0.18 ms random access time

So that’s an average of ~78 seeks/second, or 12.79 ms of random access time. In comparison, the SSD is obviously laughing, and my consumer Seagate 500GB HDDs get 64 seeks/second (15.6 ms). So these slower-RPM NAS drives actually beat the 7200 RPM consumer drives on access time. Nice.
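
If you don’t have seeker to hand, you can approximate the same measurement with dd: time a few hundred single-block reads at random offsets, bypassing the page cache. This is only a rough sketch (device and read count are illustrative, run it as root, and the loop overhead adds a little to each figure):

DEV=/dev/sda; N=200
BLOCKS=$(( $(blockdev --getsize64 "$DEV") / 4096 ))
start=$(date +%s%N)
for i in $(seq "$N"); do
  off=$(shuf -i 0-$(( BLOCKS - 1 )) -n 1)      # pick a random 4K block on the disk
  dd if="$DEV" of=/dev/null bs=4096 count=1 skip="$off" iflag=direct 2>/dev/null
done
end=$(date +%s%N)
echo "average access time: $(( (end - start) / N / 1000000 )) ms"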

Okay, I know what you are saying: what about real-world performance? How fast is BTRFS RAID-5? Well, I can dd a file and get 250MB/s plus – plenty for gigabit ethernet. What about random access? I can test that using sysbench (I detail the settings I’ve used in this post). The results are below:

  • 3x 4TB Seagate NAS, BTRFS RAID-5: 394.62 KB/s
  • 3x 500GB Seagate Barracuda 7200.12, mdadm RAID-5: 762.55 KB/s
  • NAS SSD, Kingston SSDNow 300 120GB: 34.728 MB/s

So yeah, in real-world random I/O the NAS drives with BTRFS are not as fast as an mdadm array – I suspect the checksumming that BTRFS does is playing a part. I would note that this is not representative of what I will be asking of the NAS: it will mainly be serving Plex movies (large files), and it has more than enough performance for that task. If I do need a fast bit of space in the NAS, I have the SSD there anyway!
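
For anyone wanting to repeat the tests, the commands were along these lines – treat them as illustrative rather than my exact settings (sizes, paths and options here are assumptions; the precise sysbench options are in the post linked above):

dd if=/dev/zero of=/mnt2/mainpool/testfile bs=1M count=8192 conv=fdatasync    # sequential write test
sysbench --test=fileio --file-total-size=8G prepare                           # lay out the test files
sysbench --test=fileio --file-total-size=8G --file-test-mode=rndrw --max-time=120 --max-requests=0 run
sysbench --test=fileio --file-total-size=8G cleanup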

In terms of acoustics, the drives are fairly quiet. Not silent, but you can’t really hear them seeking very much. It is quite cool hearing them spin up, since they take a fair bit of time to reach full speed. In terms of temperatures, they don’t get warm at all. 32-33 degC is the highest I’ve seen from the middle drive in the stack, and they don’t seem to fluctuate that much.

All in all, these are awesome drives for the money and space that you get. I have 8TB of usable storage space that I know will take me a good while to fill! Tomorrow’s post will talk more about power consumption of the drives, so stay tuned for that! Ta-Ta for now!

02
Jun
2016

2016 NAS: The Logic Case SC-N400 (Norco ITX S4)

Next, it is time to look at what ties the 2016 NAS together. The case.

This is the Logic Case SC-N400. It’s a clone of the Norco ITX-S4. As far as I can tell, the two cases are identical. Anyway! On with the tour!

Above, I have opened the door (which can be locked) and you can immediately see the four hot-swap 3.5″ HDD bays. These are fantastic for quickly swapping out a hard disk if one suddenly dies on you, and very convenient when you want to add/swap/remove disks. Each bay has a power LED (blue) and an access LED (green).

The bottom portion of the front has the status LEDs, power and reset buttons, and two USB 2.0 ports. The USB ports are perfect for installing the OS (not that I used them – see the review of the motherboard here). There are three network activity LEDs, just in case you need as many blinky lights as possible! +1 for blinky lights! The LEDs are very bright – keep that in mind if you plan on keeping this in a bedroom.

Here, you can see how the drives are pulled out. You unclip the bay by pushing in the blue tab, and then you can pull the bay out using the arm that springs out. Pushing the bay back in is also very easy – just push the bay back as far as it will go (reasonably gently, mind) and push the arm back into the bay until the blue tab clicks into place. They are a tight fit, so go firmly but gently.

You can also see one of the 4TB Seagate NAS drives. They fit just fine and you can see ventilation holes on the top of the drive bays to keep the drives cool. They seem to stay at about 30-35 degrees C – perfect for hard disks.

Above you can see where the drives connect to the system and the main cooling fan. There are four normal SATA connections and a single Molex power connector. There is not much room back here, so organising four SATA cables, a Molex and a fan cable is rather tricky! When it comes to cooling, the rear fan draws air from the front of the case, through the hard disks, and out the back. Some ventilation is also given for the rest of the system below the hard disks.

The case came with a rear case fan, but it was a 3 pin fan that ran at 5000 RPM. It moved a lot of air, but was very noisy. I have swapped it for an Arctic F8 PWM Rev. 2 80mm fan that I can control with the motherboard. That seems to be plenty for this system, and the fan can ramp up to full speed if things get warm.

This is the side view of the NAS. You can see the four SATA connections for the main hard disks, but we also have a spot for a 2.5″ boot drive! How thoughtful! Since this drive is just for loading the NAS OS (in this case, Rockstor), an SSD makes sense – it will see very little usage (unless you share its storage). I have put a 120GB Kingston SSD in since taking the above picture, since that 30GB Corsair drive is starting to show its age.

You can see the mainboard at the bottom – and you can see in the next picture how little room there is for the CPU cooler:

That low-profile cooler has under a centimetre of clearance left; the stock Intel cooler does not fit. There are ventilation holes above the cooler so it can draw in cool air from above – and I don’t see the CPU going above 60 degC. This is also a reason I went for the low-TDP i3: I knew there would not be much room for heat to build up in that space, so reducing the power output as much as possible was necessary to keep the system cool.

A power supply came with the case. Yeah. I swapped it out for an 80+ FSP unit. The case accepts “Flex-ATX” or “Flex” power supplies. They are relatively common online if you have a look about.

Much better! Also, you can see the new SSD in this photo. Rewiring this thing with the new power supply was not fun, since the 4-pin CPU power connector is under the hard drive bays.

So, let’s wrap up. This is a small NAS case with a lot of functionality to boot! It has the hot-swap bays, status LEDs, sufficient cooling for low-power hardware, and room for four 3.5″ drives plus a 2.5″ drive on the inside. With the 8TB disks you can get at the time of writing, you could build a 32TB storage box with this thing – in something not much bigger than a couple of shoeboxes.

If I had to criticise the case, it would be that there is pretty much no way to cable manage this thing tidily (then again it is a small case) and that the cover is quite tricky to get back on. Apart from that, it is perfect for a little home NAS.

I give this a rating of 9.5/10, and if four bays isn’t enough for you – check out the 8-bay version from Norco!

14
Apr
2016

2016 NAS: Intel Core i3 4360T

So in this ongoing series, I am reviewing the parts of my newly constructed NAS! I have introduced the whole system and the rockin’ ASRock server board I am using – but now it’s time to look at the brains of the system:

[Photo: the Intel Core i3-4360T]

That is the Intel Core i3-4360T. I searched around for reviews of this product, but I could only find some for the full 4360 (non-T). This is a 35W TDP chip, and there are some good reasons I went for this:

  • Low power consumption under load
  • Low heat output

The case I am using requires a low-profile CPU cooler – the stock Intel cooler does not fit (the case is a clone of the Norco ITX-S4). Nonetheless, I did get a stock cooler just in case it happened to fit:

That is the lowest profile stock cooler Intel did, and it is too tall. Instead, I went with the Akasa K25 low profile cooler:

With the fan speed turned down to minimum (~1000 RPM and barely audible), the CPU just about reaches 60 degrees Celsius. That is perfectly fine for me, and the system is going to be spending 99% of its time chilling at idle and serving files.
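
If you want to keep an eye on those temperatures from the OS, lm_sensors does the job on CentOS – something along these lines (output will vary by board):

yum install -y lm_sensors
sensors-detect --auto      # probe for temperature/fan sensors, accepting the defaults
sensors                    # report per-core CPU temperatures and fan speeds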

Right, that’s the physical aspects out of the way! Now, it is time to talk about performance!

Geekbench 3 (link):

  • Single core: 2816
  • Multi core: 6080

Comparison: my AMD FX6300 gets 2032 and 7976 respectively. Consider this: this CPU delivers about 75% of the multi-core performance of a six-core AMD chip for a similar price, beats it on single-core, and does it all at a much, much lower TDP (35W vs 95W).

Considering the low TDP, we can look on Passmark to see the performance-per-watt:

  • i3 4360T Passmark score: 4599 points. 4599/35W = 131.4 points/watt
  • FX6300 Passmark score: 6343 points. 6343/95W = 66.8 points/watt
  • Celeron N2830 Passmark: 993 points. 993/7.5 = 132.4 points/watt

Wow. This thing very nearly matches the power efficiency of my Intel NUC! That is saying a lot, since the NUC’s Celeron is a 7.5W part.

As for real-world performance, it is total overkill if you are just using this for file serving. This chip is perfect for a home Plex server – I have used it for on-the-fly transcoding to four devices at once, and it made that look easy. Plex’s rough guideline is 2000 PassMark points per 1080p transcode, and this thing pushes 4600 – plenty for my needs. And if I do (somehow) need more, I can just upgrade to an i5/i7!

So, overall, this thing is a beast. It has hardware encryption support, virtualisation and all the modern extensions (AVX2 etc), which make video encoding faster. I give this a 10/10 and cannot fault it for the performance it gives. Stay tuned for the next review: the case!

13
Apr
2016

2016 NAS: ASRock Rack E3C224D2I

So let’s get started on this 2016 NAS miniseries! If you missed the original post, I have built a NAS with 8TB of storage (soon to be 12TB) for backups, Plex, etc.! It’s now time to delve into what makes this NAS a NAS: the motherboard.

I searched far and wide for a mini-ITX motherboard that had a few requirements:

  • At least 5 or 6 SATA ports
  • A PCI-E slot
  • Socket 1150/1151
  • Gigabit NIC(s)
  • Decent quality/server grade

So it turns out that is not a simple thing to find. Fortunately, ASRock (under their ASRock Rack server-segment brand) sell quite a few boards that meet my criteria! They sell the E3C226D2I and the E3C224D2I. The only differences I could find between them were non-ECC RAM support (more on this later) and the number of SATA-III ports you get (the 226 has 6x SATA-III 6Gbit, while the 224 has 4x SATA-III 6Gbit and 2x SATA-II 3Gbit).

I decided to go for the 224 since it was a fair bit cheaper and only misses out on features I don’t need from the 226. My boot SSD is only SATA-II, so that is not an issue, and it leaves the four SATA-III ports free for the HDD trays (even though hard drives only barely go above SATA-I speeds :p). I did have to pick up 4GB of Kingston ECC RAM for this board, but not at a cost that would have made the 226 the cheaper option overall. Plus, ECC is a nice thing to have when running a checksumming filesystem like BTRFS (it isn’t strictly necessary, but it helps).

That said, the board does have not one, not two, but three gigabit ethernet ports!

So, there are two Intel gigabit jacks and one extra Realtek chip for the motherboard’s baseboard management. Yes, this board has low-level hardware embedded to do things like power the server on remotely, and even interact with the OS over the network. Nuts. Absolutely nuts!

Here is a screenshot of the management page (system is powered off, which is why you can only see standby voltages):

[Screenshot: the baseboard management page, 2016-04-13 17:59:19]

It gets better. You can press that “Launch” button with the Java icon next to it and a full remote console opens up – screen, keyboard, the lot:

I would like to point out that I never attached a screen to the NAS. This software even lets you mount ISOs over the network to install the OS.

Yes. That’s right. It’s possible to reinstall your NAS’ OS from anywhere in the world, without even touching the system. If that hasn’t got your nerd juices flowing, I don’t know what would.
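
And if you’d rather script it than click around the Java applet, the baseboard controller should also speak standard IPMI, so something like ipmitool works too (the address and credentials below are placeholders):

ipmitool -I lanplus -H 192.168.1.50 -U admin -P changeme chassis power status
ipmitool -I lanplus -H 192.168.1.50 -U admin -P changeme chassis power on
ipmitool -I lanplus -H 192.168.1.50 -U admin -P changeme sensor list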

Now, the OS can only see the two Intel ports (the two stacked ones). The baseboard management can see the third one off to the side (Realtek gigabit) and the top Intel port, which can be shared between the OS and baseboard management. This allows you to:

  • have a dedicated cable on a separate VLAN just for the baseboard management, or
  • share the OS’s connection so you only need to use one cable, or
  • completely isolate the baseboard management if you don’t want to use it.

That’s how configurable this system is. Booyah. Daisy-chaining your rig to it is also a thing, and so is 2Gbit bonded ethernet. Crikey.

You also get two DDR3 DIMM slots, which allow for up to 16GB of 1600MHz DDR3 ECC RAM (at least, that’s what it says on the ASRock website – you may be able to fit 32GB in this, but don’t quote me on that).

As for network performance, it just screams – it pins at 112MB/s in simple file transfers, reads and writes alike (more about performance in later articles!). One day I will give this thing 2Gbits to a smart switch and 2Gbits back to my rig, and we will see if that starts to make it sweat, because right now gigabit just doesn’t touch it.
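
When I get round to that 2Gbit experiment, an LACP bond of the two Intel ports on CentOS 7 should look roughly like this with NetworkManager (interface names are illustrative, and the switch needs a matching LAG configured):

nmcli con add type bond con-name bond0 ifname bond0 mode 802.3ad
nmcli con add type bond-slave ifname enp2s0 master bond0
nmcli con add type bond-slave ifname enp3s0 master bond0
nmcli con up bond0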

Oh yeah, that PCI-E x16 slot is just perfect for a 10Gbit card in the future, don’t you think?

Anyway, that’s my overview of the ASRock E3C224D2I. It has not crashed once since I have been using it. Flat out 10/10. Stay tuned in the coming days for more!

04
Apr
2016

The 2016 NAS: Introduction

So, I’ve figured that it is time to build myself a NAS! For far too long I have had about a billion hard drives in my main tower, and I decided that it was time to make myself some centralised storage. Behold!

Welcome to the 2016 NAS! I don’t have a name for it yet, but I’m sure I will think of one (or post suggestions? 😉 ).

Here are the specifications of this here NAS:

  • Intel Core i3 4360T @ 3.2GHz 35W TDP with Akasa K25 low profile cooler
  • ASRock E3C224D2I m-ITX server board
  • 4GB Kingston DDR3-1600 ECC RAM
  • Logic Case SC-N400 (more or less a clone of the Norco ITX-S4) with included PSU
  • 3 x Seagate NAS 4TB Hard Drives (ST4000VN000)
  • Rockstor NAS OS (uses BTRFS on CentOS)

As you can see, the door at the front of the case locks to prevent the drives being pulled and the power/reset buttons being pressed:

And yes, those are hot-swap drive bays with individual LEDs for power and disk activity! I do love myself some blinky lights! The case has a long row of LEDs on the front: power, HDD, 3x NIC and an alert LED. Also, there are two USB 2.0 ports in case you want to plug in a boot drive or something of that nature.

So here we can see the empty HDD tray. At the moment, I have 8TB of usable storage space: 3x 4TB in RAID5, with one drive’s worth of capacity going to redundancy. I can easily throw in another 4TB drive and expand to 12TB, which I will probably do since it improves the value of the storage space quite significantly!

A screenshot of the main Rockstor screen:

I will be writing mini-reviews on each part of this build, starting with the case and delving deep into how to operate the software. I hope this series inspires people in the art of the NAS – if you have any questions along the way you know where to ask!