The 2018 Router – The Motherboard

Now we’re getting into the meat of this project – the single, absolutely most important part of any router: The Motherboard!

This is the Gigabyte GA-J3455N-D3H – a Mini-ITX board with a soldered Intel Celeron J3455, which runs up to 2.3 GHz at maximum turbo. This is a quad-core part using the Apollo Lake architecture. It is considerably more powerful than the Bay Trail-based N2830 chip that was in my Intel NUC.

This board is by no means a server-oriented board, but it is the only one I could find at a reasonable price that had:

  • A passively cooled CPU that supported AES-NI
  • Two gigabit LAN ports
  • Support for DDR3L SODIMMs – I had old RAM lying around I wanted to use
  • Reasonably good build quality (e.g. solid capacitors)
  • Low power consumption
  • A low-end soldered CPU. Routers really don’t need much horsepower.

That first bullet point is critical for the latest versions of pfSense and OPNSense. Many of the other boards I considered ran J1900 parts, which don’t support AES-NI. I wanted this system to run the latest versions of the software for security purposes – being stuck on an old version was not an option for me.
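If you want to check a machine for AES-NI before committing to it, you don’t need to dig through spec sheets. Here’s a rough sketch for Linux, where the flag shows up in /proc/cpuinfo (on FreeBSD-based systems like OPNSense the equivalent information appears in the dmesg output instead – this is just to show the idea):

```python
def has_aes_ni(cpuinfo_path="/proc/cpuinfo"):
    """Return True if the 'aes' flag appears in the CPU flags list (Linux)."""
    try:
        with open(cpuinfo_path) as f:
            for line in f:
                if line.startswith("flags"):
                    # The flags line is a space-separated list of features
                    return "aes" in line.split()
    except OSError:
        pass  # not Linux, or /proc not mounted
    return False
```

On the J3455 this returns True; on a J1900 it would not, which is exactly why those boards were ruled out.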

On the rear you get a decent selection of IO – PS2 ports, 2x Serial ports, VGA and HDMI, two gigabit LAN ports, two USB 2.0 and two USB 3.1 Gen 1 ports alongside basic audio outputs. The video output ports are perfect for the initial setup of the box.

You get two 1.35V DDR3L slots (good for power consumption), four SATA 6 Gbit/s ports, a USB 3.1 Gen 1 header and two USB 2.0 headers. There are also things like TPM headers and naturally the front panel headers. There’s also a 24-pin ATX header along the right-hand edge and a 4-pin CPU power header (though I’m not sure why the latter is needed – some other low-power boards don’t seem to bother with it).

The onboard buzzer is quite loud – which is annoying when the board starts up. It is very useful, however, for pfSense and OPNSense to notify you when the system is booted, and when the system is powering down.

There was one glaring issue with this board that really made me worried about using it… the two gigabit Ethernet ports run off Realtek controllers. For FreeBSD-based systems like pfSense and OPNSense you really want Intel-based NICs.

Fortunately, the NICs on this board work perfectly with OPNSense. They show up and work just fine – and the performance is very good (more on this in future articles). I must admit I was relieved to see them working without manually shoehorning in drivers – otherwise the board would have been returned to sender!

What’s it like whilst it’s running – I hear you ask? Does that puny heatsink and the case ventilation mentioned yesterday do enough to cool that 10W monster of a CPU? Well, yes actually. During normal routing duties I don’t see anything above 47-48 degrees C, and so far after two months not once has it let me down. It’s been absolutely rock solid running OPNSense – though I had a lot of challenges getting the install completed… that’s a saga meaty enough for its own post! I could not for the life of me get pfSense installed – so I moved to OPNSense (which is basically the same thing).

So finally, time to conclude my experience using this board as my routing platform over the last couple of months:

The good:

  • It meets my requirements as discussed earlier
  • Considering it includes a CPU, it is well priced (~£80)
  • Excellent build quality
  • Silent operation
  • Sips power

And the not-so-good:

  • Realtek NICs – not the best for pfSense and OPNSense.
  • No ECC support… not the end of the world though
  • BIOS buzzer really goes wild at boot with no keyboard, mouse or screen attached
  • Not server-grade – something you want with critical 24/7 equipment.
  • pfSense wouldn’t play ball at all
  • OPNSense was a bit tricky to get installed and needed some crowbar-ing

So yeah – it’s worked out to be a good board for OPNSense once the install is done. That’s the most headachey part of this project – getting the OS installed. After that – it’s been smooth sailing. More to come on performance, power and the software!


The 2018 Router – The Case

Hello again! I see you’re back for some more 2018 router goodness… time to talk about the enclosure!

Here, we have the M350 Universal Mini-ITX Enclosure – selected primarily for its small size. It is covered in ventilation holes on the front, the top and the sides – which is essential for this build. I figured that (most) other routers don’t have cooling fans, so why should mine need one? It’s just another part to go wrong in my opinion – plus without a router you basically have no internet. Can’t have that now, can we?

Internally the case does not have very much – it’s basically a motherboard tray with a lid. With a Mini-ITX board installed you can see that it could not possibly get any smaller! The CPU in my system is passively cooled – the ventilation in the case is important for natural convection to do its thing. I can happily report that running temperatures are very respectable!

The back of the case has a standard motherboard IO panel fitting and a spot for a DC-in connector. The case does not come with any DC connector fittings – my PSU came with one that fit the mounting holes on the back perfectly. There is no PCI slot space – though you may be able to use a PCI card with the bracket taken off if it doesn’t interfere with the DC jack.

The front cover comes off to expose two right-angled USB ports on the inside. This is really handy for things like home theatre setups, where keyboard & mouse dongles can be hidden. I could have put the USB drive on the inside here to keep it hidden and inaccessible (useful if you have kids, for instance) – but I kept it plugged into the back for USB 3.0.

This front PCB also has a little jumper that, when enabled, will start the PC up when it receives power by shorting the power button pins (simulating a button press) – presumably running off the 5V USB supply. This can be very handy if your motherboard does not support powering on when AC power is connected – useful for installations where the PC must always be on. Fortunately, mine does support it, so I don’t need this. I’d logically expect the circuit to check whether the power LED is lit and, if not, press the power button (I haven’t tested it, but that’s how I’d expect it to work).

There is also a power LED (blue) and HDD activity LED (red I think – not sure since I don’t have an internal drive).

Speaking of internal drives – there is a metal bracket included that sits over the top of the motherboard. You can use this to mount a 2.5″ drive or 40mm fans – here’s a link to a page showing the different configurations. There is also a front fan intake spot for a 40mm fan.

The case can be DIN rail mounted or VESA mounted onto the back of a monitor – though the brackets required are separate addons.

Overall, I am very happy with this case! It fits all the requirements I had for this system:

  • Small size
  • Plenty of ventilation
  • No fans needed
  • Reasonably priced (£36 from mini-itx.com).

The only quirks I had with this case:

  • No PCI support in the slightest
  • The front panel connector leads (e.g. power button) are a bit short

A great case for a router! Next up, is the most important part of this whole project – the motherboard!


2018 Router Introduction

2018 has arrived, and with it a myriad of site issues that I have only just now got around to fixing (e.g. broken comments). Some big changes have been going on in my personal life – one of which is moving into my own place!

With that, it means I now have full control over what I get to have in my pad! So, it’s time to have something I’ve wanted for a good while… my own custom network infrastructure!


Above – my new network setup. The bottom shelf is the new router (topic of the next series of posts) and the network switch. On the next shelf up is the 2016 NAS (Still going strong!) Above that is a network printer that was too big to get in shot.

So… the internet comes into the living room, into the modem (more on this later) and I have a 15m flat white Ethernet “Cat7” lead running around the skirting boards into the office. I’ve always subscribed to the view that hardwired Ethernet is the most reliable form of connection. WiFi is… well… WiFi, and I have had issues with Powerline in the past – so the best way to go is hardwired Ethernet.

A closer look at the box – the case is the “M350 Universal Mini-ITX Enclosure” that can be found at various sites. It’s probably the smallest Mini-ITX case I could find!

This should give you an idea of just how small the case is – it’s barely large enough to fit a Mini-ITX motherboard. The one you are looking at is the Gigabyte GA-J3455N-D3H – with a soldered Intel Celeron J3455. I will go into the reasons why I picked this one in the next posts about each component!

For power I’m using a Pico-PSU and an external 12V FSP power brick – I’m not asking much from them in terms of power so they should last a decent length of time! The RAM is just a 4GB stick of DDR3L that I had lying around that I could throw in.

The back of the unit – barely large enough to fit the standard IO shield! You can also see the DC-in jack on the rear. On the back we have a couple of video outputs for the initial setup of the box and, most importantly, two gigabit LAN ports for the networking (one for the internal LAN, and one going back to the modem). I have loaded OPNSense onto the USB drive plugged into the back – once it’s booted it doesn’t really need to access the drive, so a cheap 32GB USB drive is all that’s needed there.

The main interface of OPNSense is above – I really like the interface and flexibility you get! I have had to learn (often the hard way) how to properly set up the network. I’ll go into more details in future topics!

That’s it for now – a bit of a teaser! Over the next few days I will be uploading a series of posts about this box and my network!


Vega 56 Introduction

I’ve finally upgraded the last piece in Flammenwerfer – my 2017 build using AMD’s Ryzen. The GTX 960 served me very well, but alas, it had no place in an 8-core, 16-thread system. I opted to go for Vega 56 – this article will go into the details of why I went with Vega 56 and my first impressions; later on I will delve into the performance metrics.

The main problem with RX Vega 56 (and the 64) at the moment is the pricing. As of writing this article, prices seem to range from £450-550 (which is ridiculous). I was very lucky to spot a Powercolor RX Vega 56 for £380 – which is getting very close to MSRP levels. Since then, prices have gone back up – so I am very glad I went for the card when I spotted it.

A very tasty upgrade over the ASUS GTX 960 Strix (pictured above the Vega 56). I personally like the look of Vega 56 – though some have said they find it boring. The RADEON logo on the top side lights up red, at just the right brightness. In my build (pictured at the top) I have gone for a black and red theme, which this card fits absolutely spot on.

Why Vega then? Why didn’t you get a 1070? My reasons, in no particular order:

  • My LG 29UM68 supports FreeSync
  • The reference cooler matches my build (no aftermarket coolers yet 🙁 )
  • I get to use all of my CableMod custom cables
  • The performance is awesome – more than plenty for my needs
  • New architecture – better support for new APIs (eg, Vulkan)
  • AyyyyMD (I am a bit of an AMD fanboy – supporting the underdog!)

My initial gripes:

  • I’m fairly positive I’ve heard jet engines quieter than this card under load
  • The drivers are very beta at the time of writing
  • The power draw is a tad high (especially compared to a 1070)

Fortunately, the card is extremely quiet at idle, and the idle power consumption is very good (on par with the 960). The 650W power supply for this system is perfect – with everything maxed out (and I mean maxed), the most I have been able to record at the wall is 550W AC – about 500W DC. Usually whilst gaming I see about 300W.

As for the noise under load, it’s definitely noticeable. It’s more of a “whoosh” than a “whine” – although it can get quite loud, it’s not annoying. Wearing headphones or having the sound turned up in a game quickly removes the noise issue for me. I have, however, discovered the best way to get rid of the noise – and that is to use FreeSync.

From AMDs marketing slides on FreeSync

Holy balls. FreeSync. It’s the future, man. This LG 29UM68 has a variable refresh range from 40 to 75Hz – where essentially the monitor will refresh at the same rate that the GPU can render frames. This gets rid of tearing when V-Sync is disabled, and removes latency when V-Sync is enabled (since the monitor isn’t duplicating frames when the frame rate dips below 60Hz).

The trick, I have found, is to enable FreeSync in the AMD Driver, then use V-Sync enabled in games with the maximum refresh rate set to 75Hz. This then caps the frame rate to 75FPS and allows it to dip below without encountering stuttering and latency issues. Additionally, this means the GPU isn’t cranked out at 100% all the time – resulting in lower power usage, thermal output and fan speeds!
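To illustrate the frame-cap idea, here’s a minimal sketch of a sleep-based frame limiter in Python (the actual capping is done by the driver and the game engine, not code like this – it just shows the principle of spending the leftover frame budget idle instead of rendering flat out):

```python
import time

def run_capped(render_frame, target_fps=75.0, frames=5):
    """Call render_frame() at most target_fps times per second,
    sleeping out the unused remainder of each frame budget."""
    budget = 1.0 / target_fps
    for _ in range(frames):
        start = time.perf_counter()
        render_frame()
        leftover = budget - (time.perf_counter() - start)
        if leftover > 0:
            time.sleep(leftover)  # GPU/CPU sit idle here -> less heat and noise
```

Because the GPU finishes each frame early and then waits, it never pegs at 100% – which is exactly why the fans calm down.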

FreeSync is one of those technologies you have to see to believe. It’s a night and day improvement over a standard 60Hz experience. My monitor does 75Hz, which is immediately waaay smoother than 60, but you can quite easily find screens that will do 144Hz. G-Sync offers a similar feature-set to FreeSync, except due to its proprietary nature the panels all cost significantly more than the equivalent FreeSync displays. So you can either spend more on Vega and get a cheaper display, or spend less on an Nvidia card and pay more for the display… nice.

In the next posts I will talk about the performance and overclocking!



An addition to the family

Powaaar! Vega baby!


AMD Ryzen 7 1700X: Overclocking on Water

So as I’ve mentioned a while back, I have a Deepcool “GamerStorm” Captain 360 EX now added to Flammenwerfer. Seeing as it has three 120mm fans on the radiator with the sole purpose of removing heat, it would be simply rude not to overclock the 1700X. I mean – free performance? Sign me up 😀

At stock settings the CPU barely reaches 45 degrees C – and scores ~15000 in PerformanceTest:

At 3.825 GHz (my daily OC) I get just shy of ~16,000:

And at the highest I was able to push the CPU at 4GHz I got about 16,400:

I have found 3.825GHz to be the sweet spot for my daily needs – at this frequency I can run at 1.30 V with perfect stability. As I started to push the frequency past this point, the voltage required seems to jump up significantly. Below is a table of what I noticed:

  • Stock (3.4GHz) – ~1.25V
  • 3.7 GHz – 1.25V
  • 3.825 GHz – 1.30V
  • 3.9 GHz – 1.4V
  • 4.0 GHz – 1.45V

3.9 and 4 GHz were only achieved with some pretty hefty LLC (Load-Line Calibration) enabled (this boosts the voltage to the CPU to help counteract voltage drop under load) – this ramps up CPU and especially VRM temperatures quite a bit. With the fans set to 100% they spin at ~1800 RPM (AKA loud as heck) and this does keep the CPU and VRM comfortable (about 70°C for the CPU and 80°C for the VRM). For daily use though it’s a bit too loud.
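That voltage wall matters more than the raw frequency numbers suggest, because dynamic CPU power scales roughly with frequency × voltage². A back-of-envelope sketch (a first-order approximation that ignores static/leakage power, using my measured voltages from the list above):

```python
def relative_power(freq_ghz, vcore, base_freq=3.4, base_v=1.25):
    """Approximate dynamic power relative to stock, using P ~ f * V^2."""
    return (freq_ghz / base_freq) * (vcore / base_v) ** 2

# 4.0 GHz @ 1.45 V versus stock 3.4 GHz @ 1.25 V:
print(round(relative_power(4.0, 1.45), 2))  # ~1.58x the heat for ~18% more clock
```

So the last 175 MHz costs disproportionately more in heat than it gains in performance – which is exactly what the fan speeds were telling me.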

ASRock’s latest BIOS revisions seem to have much, much better fan control options. I’ve slightly customised the “Quiet” preset to have a slower fan speed at idle (since it wasn’t quiet enough) and to spin the fans up when things warm up. At 3.825 GHz @ 1.3V the CPU stays frosty and the VRM hovers around the 90 degC range (perfectly acceptable when they are designed to reach 125 degC!).

The fans reach about 1000 RPM at the very most – at this speed they are audible but the H440 does a good job of keeping the noise down. The pump runs at about 2300 RPM and is very quiet – it can’t be heard over anything else in the system.

To get 4.0 GHz I ran the fans at 100% – the temperatures were still great, but I had to set a VCore of 1.45V for it to be stable (even then it would occasionally crash) and it took heavy LLC to get benchmarks to run. I was seeing 1.46-1.47V being applied, which is far too much for daily use – long term operation at those voltages degrades the chip to the point where it needs more and more voltage to reach the same clocks. It’s not worth it for me to go to such extremes for the sake of 175 MHz.

Ultimately the Captain 360 EX is the star of the show here – this cooler is totally overkill for this chip but it gets the job done with the fans set to low speeds. Ryzen 7 when overclocked is an absolute powerhouse, but ultimately I feel the same as many other critics do – I wish we were able to reach 4GHz+ like in the Bulldozer/Piledriver days. It seems now we are back to how Phenom II used to behave – 4GHz just needs too much.


2017 has been the most interesting year for PC tech in a long time

And I really mean it. It’s been a very interesting year – mainly thanks to the return of AMD.

You cannot deny that there has been a certain level of stagnation in the processor market for the last five (plus) years. Intel just plodded on releasing the same quad-core chips with marginal improvements each generation – which, in all fairness, did work out well for them.

A while back I upgraded my main system to a Ryzen 7 build after spending so long with an FX-based system. Being a student at the time I put the FX system together, being able to build a six-core system for under £200 was absolutely fantastic. The performance was, for the time, fairly decent – but I don’t think I’ll ever experience such value for money again.

Unfortunately, AMD just didn’t follow up on the FX line. From a consumer standpoint, it’s like they stopped altogether on the higher-end processor market. This is what gave Intel its increased dominance in recent years.

However great it is for a company to have that “monopoly”, ultimately it’s the consumers that suffer. Some well placed competition in the market is what drives innovation forwards, and it’s clear to everyone and their dog that Intel is only this year willing to start innovating again.

I’m talking about Coffee Lake of course – finally bringing an end to dual-core chips and moving the mainstream market onto six cores. Heck, even the i3 is a quad-core, and the i5s and i7s are six-core chips. About damn time too – the industry has been yearning for higher and higher core count chips for the last few years.

This is what good competition brings for consumers – technology advances to outdo competitors. We need companies like AMD to shake things up, to be the underdog, the usurper. I, myself, own both AMD and Intel systems, and personally I just want the best product from either company that suits my use case. Intel’s i3 is what powers my 2016 NAS, and it’s bloody awesome for the low power consumption. And I went with Ryzen since it offers damn good performance for the money.

So this is why this year has been so interesting. AMD’s release has really shaken up the industry in a way that I haven’t seen for… well, ever really. Even if you’re an Intel user, this can only be a good thing.

That’s my little r/ShowerThoughts-inspired take on 2017. Here’s to innovation and progress!


USB 3.1 10 Gigabit – Why?

My shiny new ASRock X370 Fatal1ty Gaming K4 supports USB 3.1 10 Gigabit through a standard type-A port and a fancy new type-C port. Having owned the new Samsung Galaxy A5 2017 for a while now (which has a type-C port) and enjoying the new standard, I decided to grab a USB 3.1 type-C SSD enclosure from Sharkoon to use as fast portable storage.

I put my Kingston HyperX Fury 240GB SSD in the enclosure and did a few speed tests over the range of USB standards:

Naturally I did not bother going below USB 2.0 since nobody in their right minds would use USB 1.1 for storage purposes.

Here we can see that we are bottlenecked even at USB 3.0’s 5 Gbit/s, at 420 MB/s. This makes sense, since the SSD’s SATA interface is rated at 6 Gbit/s – it’s only when we get to USB 3.1 10 Gbit/s that we can realise the full potential of the HyperX SSD (which is rated at 500 MB/s). USB 2.0 is left in the dust at this point.
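The numbers line up with the raw line rates once you account for encoding overhead. A rough sketch (the bus speeds and encodings are the published ones: 8b/10b for USB 3.0’s 5 Gbit/s, 128b/132b for USB 3.1 Gen 2’s 10 Gbit/s; USB 2.0’s real-world ceiling is lower still due to protocol overhead):

```python
def usable_mb_s(line_rate_gbps, encoding_efficiency):
    """Theoretical payload bandwidth in MB/s (1 MB = 10^6 bytes),
    before protocol overhead."""
    return line_rate_gbps * 1e9 * encoding_efficiency / 8 / 1e6

print(usable_mb_s(5.0, 8 / 10))       # USB 3.0: 500.0 MB/s ceiling
print(usable_mb_s(10.0, 128 / 132))   # USB 3.1 Gen 2: ~1212 MB/s ceiling
print(usable_mb_s(0.48, 1.0))         # USB 2.0: 60.0 MB/s (very optimistic)
```

So a SATA SSD rated at ~500 MB/s sits right at the USB 3.0 ceiling, which is exactly why only the 10 Gbit link shows the drive’s full speed.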

So we can now utilise an extra ~80 MB/s out of an SSD. Whoop-de-doo. My question is – how much does this really affect things in the real world compared with USB 3.0 5 Gigabit/s? I have an install of Ubuntu Linux on the SSD that I will use to measure the difference in boot times. I can exclude BIOS POST times by reliably timing from the GRUB boot menu:

  • USB 3.1 10 Gigabit/s: 16.40s
  • USB 3.0 5 Gigabit/s: 16.51s
  • USB 2.0 480 Megabit/s: 28.69s

Well there you go – you won’t see much difference going from 5 gigabits to 10 gigabits if you’re running one SSD in an external enclosure. It’s apparent however that USB 2.0 is a severe bottleneck in this case – so moving to at least 3.0 is definitely worth it. After that – not really.

The main selling point of that Sharkoon enclosure for me is the type-C interface. Before, I was using USB micro-B 3.0 – a truly awful connector that never should have existed in my opinion. Type-C is right now the best that you can get, and I look forward to it becoming ubiquitous in the future.


AGESA – ASRock Fatal1ty X370 Gaming K4

I’ve been keeping track of the latest BIOS releases for my shiny new rig – Flammenwerfer. Before this update, I could only manage DDR4-2400 reliably on my Hynix-based set of Corsair Vengeance LED memory.

I grabbed the update – and some things have definitely improved:

These are the best settings I can currently achieve. For one thing, the RAM now runs at DDR4-2666 with 16-17-17-35 timings. The kit itself, however, is rated at DDR4-3000 @ 15-17-17-35, and I was not able to reliably get CAS 15 to POST first time at 2666MHz.

Unfortunately, I cannot get the DDR4-2933 XMP profile to POST at all. Even with higher memory voltages and slower timings – it’s not going to budge. I did try DDR4-2800, but even then there was no dice.

The BIOS has been updated with waaaaaay more memory tweaking options – so I bet ASRock will be going through and working out more appropriate memory profiles for Hynix-based RAM. I’m hoping for some future BIOS update to magically allow me to get to that magic DDR4-2933 speed!

So some progress from AMD has definitely been made – my max overclock has gone up 100MHz @ 1.4V to 3.9GHz:

And yes – that voltage reading is still totally wrong. As for the “Bench” tab:

Against an Intel 6950X at stock – this thing gets scarily close considering that is a ~£1600 part.

I’ve only been using this new AGESA code for the last day or two – so far everything seems to be rock-stable. I’m still able to game, render video – do all the usual high-load stuff without things falling over.

So that’s my brief two cents on this matter – I am glad that ASRock are keeping on top of the updates. They do seem to take a bit longer than other vendors, but at least when they are released they seem to be stable. Which is only a good thing!


AMP support!

Just a quick blog update: I’ve got AMP enabled and it seems to work great! If you’re on a mobile device and find it tricky to navigate the site, put /amp on the end of the URL.

For example:


I’m going to see if I can get a little button at the top of the pages to jump to the amp equivalent. Let me know if this helps out 😊