30
Mar
2018

2018 Router Introduction

2018 has arrived, and with it a myriad of site issues that I have only just got around to fixing (e.g. broken comments). Some big changes have been going on in my personal life – one of which is moving into my own place!

That means I now have full control over what I get to have in my pad! So, it’s time for something I’ve wanted for a good while… my own custom network infrastructure!

[Photo: the new network setup]

Above – my new network setup. The bottom shelf holds the new router (the topic of the next series of posts) and the network switch. On the next shelf up is the 2016 NAS (still going strong!). Above that is a network printer that was too big to get in shot.

So… the internet comes into the living room, into the modem (more on this later), and a 15m flat white “Cat7” Ethernet lead runs around the skirting boards into the office. I’ve always subscribed to the view that hardwired Ethernet is the most reliable form of connection. WiFi is… well… WiFi, and I have had issues with Powerline in the past – so hardwired Ethernet it is.

A closer look at the box – the case is the “M350 Universal Mini-ITX Enclosure” that can be found at various sites. It’s probably the smallest Mini-ITX case I could find!

This should give you an idea of just how small the case is – it’s barely large enough to fit a Mini-ITX motherboard. The one you are looking at is the Gigabyte GA-J3455N-D3H – with a soldered Intel Celeron J3455. I will go into the reasons why I picked this one in the next posts about each component!

For power I’m using a Pico-PSU and an external 12V FSP power brick – I’m not asking much from them in terms of power so they should last a decent length of time! The RAM is just a 4GB stick of DDR3L that I had lying around that I could throw in.

The back of the unit – barely large enough to fit the standard IO shield! You can also see the DC-in jack on the rear. There are a couple of video outputs for the initial setup of the box and, most importantly, two gigabit LAN ports for the networking (one for the internal LAN, and one going back to the modem). I have loaded OPNsense onto the USB drive plugged into the back – once it’s booted it doesn’t really need to access the drive, so a cheap 32GB USB drive is all that’s needed there.

The main interface of OPNsense is above – I really like the interface and the flexibility you get! I have had to learn (often the hard way) how to properly set up the network. I’ll go into more detail in future posts!

That’s it for now – a bit of a teaser! Over the next few days I will be uploading a series of posts about this box and my network!

03
Dec
2017

Vega 56 Introduction

I’ve finally upgraded the last piece in Flammenwerfer – my 2017 build using AMD’s Ryzen. The GTX 960 served me very well, but alas, it had no place in an 8-core, 16-thread system. I opted for Vega 56 – this article will go into why I chose it and my first impressions, and later on I will delve into the performance metrics.

The main problem with RX Vega 56 (and the 64) at the moment is the pricing. As of writing this article, prices seem to range from £450-550 (which is ridiculous). I was very lucky to spot a Powercolor RX Vega 56 for £380 – which is getting very close to MSRP levels. Since then, prices have gone back up – so I am very glad I went for the card when I spotted it.

A very tasty upgrade over the ASUS GTX 960 Strix (pictured above the Vega 56). I personally like the look of Vega 56 – though some have said they find it boring. The RADEON logo on the top side lights up red, at the absolutely perfect brightness. In my build (pictured at the top), I have gone for a black and red theme, which this card fits absolutely spot on.

Why Vega then? Why didn’t you get a 1070? My reasons, in no particular order:

  • My LG 29UM68 supports FreeSync
  • The reference cooler matches my build (no aftermarket coolers yet 🙁 )
  • I get to use all of my CableMod custom cables
  • The performance is awesome – more than plenty for my needs
  • New architecture – better support for new APIs (eg, Vulkan)
  • AyyyyMD (I am a bit of an AMD fanboy – supporting the underdog!)

My initial gripes:

  • I’m fairly positive I’ve heard jet engines quieter than this card under load
  • The drivers are very beta at the time of writing
  • The power draw is a tad high (especially compared to a 1070)

Fortunately, the card is extremely quiet at idle, and the idle power consumption is very good (on par with the 960). The 650W power supply for this system is perfect – with everything maxed out (and I mean maxed), the most I have been able to record at the wall is 550W AC, which is about 500W DC. Usually whilst gaming I see about 300W.
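
Here’s a rough sketch of the arithmetic behind those figures (the 500W DC number is an estimate rather than a measurement, so treat the efficiency as ballpark):

```python
# Rough sanity check on the wall-power figures above (estimates, not
# precise measurements).
ac_watts = 550      # peak draw measured at the wall (AC)
dc_watts = 500      # estimated DC load the PSU is actually delivering
psu_rating = 650    # the system's power supply rating

efficiency = dc_watts / ac_watts
print(f"Implied PSU efficiency: {efficiency:.0%}")           # ~91%
print(f"Headroom at peak load:  {psu_rating - dc_watts} W")  # ~150 W spare
```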

As for the noise under load, it’s definitely noticeable. It’s more of a “whoosh” than a “whine” – although it can get quite loud, it’s not annoying. Wearing headphones or having the sound turned up in a game quickly removes the noise issue for me. I have, however, discovered the best way to get rid of the noise – and that is to use FreeSync.

From AMD’s marketing slides on FreeSync

Holy balls. FreeSync. It’s the future, man. This LG 29UM68 has a variable refresh range from 40 to 75Hz – essentially the monitor refreshes at the same rate that the GPU can render frames. This gets rid of tearing when V-Sync is disabled, and removes latency when V-Sync is enabled (since the monitor isn’t duplicating frames when the frame rate dips below 60Hz).

The trick, I have found, is to enable FreeSync in the AMD driver, then enable V-Sync in games with the maximum refresh rate set to 75Hz. This caps the frame rate at 75FPS and allows it to dip below that without stuttering or latency issues. Additionally, it means the GPU isn’t cranked at 100% all the time – resulting in lower power usage, thermal output and fan speeds!
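
To illustrate the idea (a toy sketch only – the 40–75Hz window is my monitor’s spec, the rest is just arithmetic):

```python
# Toy illustration of why a 75 FPS cap suits a 40-75 Hz FreeSync window.
VRR_MIN_HZ, VRR_MAX_HZ = 40, 75   # the LG 29UM68's variable refresh range

def in_vrr_window(fps: float) -> bool:
    """True if the panel can refresh in lockstep with this frame rate."""
    return VRR_MIN_HZ <= fps <= VRR_MAX_HZ

for fps in (35, 48, 60, 75, 90):
    frametime_ms = 1000 / fps
    status = "synced" if in_vrr_window(fps) else "outside window"
    print(f"{fps:3d} FPS ({frametime_ms:5.1f} ms/frame): {status}")

# Capping at 75 FPS stops the GPU rendering frames the panel can't match,
# while dips down to 40 FPS stay tear- and stutter-free.
```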

FreeSync is one of those technologies you have to see to believe. It’s a night and day improvement over a standard 60Hz experience. My monitor does 75Hz, which is immediately waaay smoother than 60, but you can quite easily find screens that will do 144Hz. G-Sync offers a similar feature set to FreeSync, except due to its proprietary nature the panels all cost significantly more than the equivalent FreeSync displays. So you can either spend more on Vega and get a cheaper display, or spend less on an Nvidia card and pay more for the display… nice.

In the next posts I will talk about the performance and overclocking!


20
Nov
2017

An addition to the family

Powaaar! Vega baby!

06
Nov
2017

AMD Ryzen 7 1700X: Overclocking on Water

As I mentioned a while back, I now have a Deepcool “GamerStorm” Captain 360 EX added to Flammenwerfer. Seeing as it has three 120mm fans on the radiator with the sole purpose of removing heat, it would be simply rude not to overclock the 1700X. I mean – free performance? Sign me up 😀

At stock settings the CPU barely reaches 45 degC – and scores ~15,000 in PerformanceTest:

At 3.825 GHz (my daily OC) I get just shy of 16,000:

And at the highest I was able to push the CPU – 4GHz – I got about 16,400:

I have found 3.825GHz to be the sweet spot for my daily needs – at this frequency I can run at 1.30 V with perfect stability. As I pushed the frequency past this point, the required voltage jumped up significantly. Below is a table of what I noticed (with a rough power-scaling sketch after it):

  • Stock (3.4GHz) – ~1.25V
  • 3.7 GHz – 1.25V
  • 3.825 GHz – 1.30V
  • 3.9 GHz – 1.4V
  • 4.0 GHz – 1.45V
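
That voltage jump matters more than it looks, because dynamic CPU power scales roughly with frequency times voltage squared (P ∝ f·V²). A back-of-the-envelope sketch using the table above (a crude model – it ignores static power and actual current draw, so treat the ratios as indicative only):

```python
# Back-of-the-envelope dynamic power scaling: P ~ f * V^2.
# Frequencies in GHz and voltages taken from the table above.
points = [(3.4, 1.25), (3.7, 1.25), (3.825, 1.30), (3.9, 1.40), (4.0, 1.45)]

base_f, base_v = points[0]
base_power = base_f * base_v ** 2

for f, v in points:
    rel_power = (f * v ** 2) / base_power
    rel_clock = f / base_f
    print(f"{f:5.3f} GHz @ {v:.2f} V: ~{rel_power:.2f}x power "
          f"for {rel_clock:.2f}x clock")

# 4.0 GHz comes out at roughly 1.58x the stock power estimate for only
# 1.18x the clock - the classic diminishing-returns wall.
```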

3.9 and 4 GHz were only achievable with some pretty hefty LLC (Load-Line Calibration) enabled (this boosts the voltage to the CPU to help counteract voltage droop under load) – and that ramps up CPU and especially VRM temperatures quite a bit. With the fans set to 100% they spin at ~1800 RPM (AKA loud as heck), which does keep the CPU and VRM comfortable (about 70 degC for the CPU and 80 degC for the VRM). For daily use though it’s a bit too loud.
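
If LLC is new to you, here’s a toy model of the vdroop it compensates for (the load current and load-line resistances below are illustrative guesses, not figures from my board):

```python
# Toy vdroop model: under load the CPU sees Vload = Vset - I * R_loadline.
# LLC effectively lowers the load-line resistance so the voltage sags less.
def v_load(v_set: float, current_a: float, r_loadline_ohms: float) -> float:
    return v_set - current_a * r_loadline_ohms

V_SET = 1.40          # BIOS-set VCore for the 3.9 GHz attempt
LOAD_CURRENT_A = 90   # rough full-load current draw (an assumption)

for label, r_ll in [("no LLC", 0.0016), ("medium LLC", 0.0008), ("heavy LLC", 0.0)]:
    print(f"{label:>10}: {v_load(V_SET, LOAD_CURRENT_A, r_ll):.3f} V under load")

# With heavy LLC the droop disappears, but transient overshoot can push
# the real voltage above the setpoint - hence the extra VRM heat.
```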

ASRock’s latest BIOS revisions seem to have much, much better fan control options. I’ve slightly customised the “Quiet” preset to have a slower fan speed at idle (since it wasn’t quiet enough) and to spin the fans up when things warm up. At 3.825 GHz @ 1.3V the CPU stays frosty and the VRM hovers around the 90 degC range (perfectly acceptable when they are designed to reach 125 degC!).

The fans reach about 1000 RPM at the very most – at this speed they are audible but the H440 does a good job of keeping the noise down. The pump runs at about 2300 RPM and is very quiet – it can’t be heard over anything else in the system.

To get 4.0 GHz I ran the fans at 100% – the temperatures were still great, but I had to set a VCore of 1.45V for it to be stable (even then it would occasionally crash), and it took heavy LLC to get benchmarks to run. I was seeing 1.46–1.47V being applied, which is far too much for daily use – long-term operation at those voltages degrades the chip to the point where it needs more and more voltage to reach higher clocks. It’s not worth going to such extremes for the sake of 175 MHz.

Ultimately the Captain 360 EX is the star of the show here – this cooler is totally overkill for this chip but it gets the job done with the fans set to low speeds. Ryzen 7 when overclocked is an absolute powerhouse, but I feel the same as many other critics do – I wish we were able to reach 4GHz+ like in the Bulldozer/Piledriver days. It seems now we are back to how Phenom II used to behave – 4GHz just needs too much.

16
Aug
2017

2017 has been the most interesting year for PC tech in a long time

And I really mean it. It’s been a very interesting year – mainly thanks to the return of AMD.

You cannot deny that there has been a certain level of stagnation in the processor market for the last five (plus) years. Intel just plodded on, releasing the same quad-core chips with marginal improvements each generation – which, in all fairness, did work out well for them.

A while back I upgraded my main system to a Ryzen 7 build after spending so long with an FX-based system. I was a student when I put the FX system together, and being able to build a six-core system for under £200 was absolutely fantastic. The performance was, for the time, fairly decent – but I don’t think I’ll ever experience such value for money again.

Unfortunately, AMD just didn’t follow up on the FX line. From a consumer standpoint, it was as if they stopped competing in the higher-end processor market altogether. This is what gave Intel its increased dominance in recent years.

However great it is for a company to have that “monopoly”, ultimately it’s the consumers that suffer. Some well placed competition in the market is what drives innovation forwards, and it’s clear to everyone and their dog that Intel is only this year willing to start innovating again.

I’m talking about Coffee Lake of course – finally bringing an end to dual-core chips and moving the mainstream market onto six cores. Heck, even the i3 is now a quad-core, and the i5s and i7s are six-core chips. About damn time too – the industry has been yearning for higher and higher core counts for the last few years.

This is what good competition brings for consumers – technology advances to outdo competitors. We need companies like AMD to shake things up, to be the underdog, the usurper. I own both AMD and Intel systems, and personally I just want the best product from either company that suits my use case. Intel’s i3 is what powers my 2016 NAS, and it’s bloody awesome for the low power consumption. And I went with Ryzen since it offers damn good performance for the money.

So this is why this year has been so interesting. AMD’s return has really shaken up the industry in a way that I haven’t seen for… well, ever, really. Even if you’re an Intel user, this can only be a good thing.

That’s my little “r/ShowerThoughts”-inspired take on 2017. Here’s to innovation and progress!

16
Jul
2017

USB 3.1 10 Gigabit – Why?

My shiny new ASRock Fatal1ty X370 Gaming K4 supports USB 3.1 10 Gigabit through a standard type-A port and a fancy new type-C port. Having owned the Samsung Galaxy A5 2017 for a while now (which has a type-C port) and enjoying the new standard, I decided to grab a USB 3.1 type-C SSD enclosure from Sharkoon to use as fast portable storage.

I put my Kingston HyperX Fury 240GB SSD in the enclosure and did a few speed tests over the range of USB standards:

Naturally I did not bother going below USB 2.0, since nobody in their right mind would use USB 1.1 for storage purposes.

Here we can see that we are bottlenecked even at USB 3.0 5 Gigabit/s, topping out at 420 MB/s. This makes sense, since the SSD is a 6 Gigabit/s SATA drive – it’s only when we get to USB 3.1 10 Gbit/s that we can realise the full potential of the HyperX SSD (which is rated at 500 MB/s). USB 2.0 is left in the dust at this point.
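
The raw line rates are also a bit misleading because of encoding overhead: USB 3.0 uses 8b/10b encoding, while USB 3.1 Gen 2 moved to the much leaner 128b/132b scheme. A quick sketch of the theoretical ceilings (before protocol overhead, which eats a further slice):

```python
# Theoretical payload ceilings after line encoding. Protocol overhead
# (framing, flow control) reduces the usable figure further.
links = [
    ("USB 2.0, 480 Mbit/s",            480e6, 1.0),  # bit stuffing varies; ~35-40 MB/s real-world
    ("USB 3.0, 5 Gbit/s (8b/10b)",     5e9,   8 / 10),
    ("USB 3.1, 10 Gbit/s (128b/132b)", 10e9,  128 / 132),
]

for name, line_rate, encoding_efficiency in links:
    mb_per_s = line_rate * encoding_efficiency / 8 / 1e6
    print(f"{name}: ~{mb_per_s:.0f} MB/s ceiling")

# USB 3.0's ~500 MB/s ceiling minus protocol overhead lands right around
# the 420 MB/s measured - the bus, not the SSD, is the limiting factor.
```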

So we can now squeeze an extra ~80MB/s out of an SSD. Whoop-de-doo. My question is: how much does this really change things in the real world compared to USB 3.0 5 Gigabit/s? I have an install of Ubuntu Linux on the SSD that I will use to measure the difference in boot times. I can factor out BIOS POST times by timing from the GRUB boot menu:

  • USB 3.1 10 Gigabit/s: 16.40s
  • USB 3.0 5 Gigabit/s: 16.51s
  • USB 2.0 480 Megabit/s: 28.69s

Well, there you go – you won’t see much difference going from 5 gigabits to 10 gigabits if you’re running one SATA SSD in an external enclosure. It’s apparent, however, that USB 2.0 is a severe bottleneck in this case – so moving to at least 3.0 is definitely worth it. Beyond that – not really.

The main selling point of that Sharkoon enclosure for me is the type-C interface. Before, I was using USB 3.0 micro-B – a truly awful connector that should never have existed, in my opinion. Type-C is the best you can get right now, and I look forward to it becoming ubiquitous in the future.

10
Jun
2017

AGESA 1.0.0.6 – ASRock Fatal1ty X370 Gaming K4

I’ve been keeping track of the latest BIOS releases for my shiny new rig – Flammenwerfer. Until now, I could only manage DDR4-2400 reliably on my Hynix-based set of Corsair Vengeance LED memory.

I grabbed the update – and some things have definitely improved:

These are the best settings I can currently achieve. For one thing, the RAM now runs at DDR4-2666 with 16-17-17-35 timings. The kit itself, however, is rated at DDR4-3000 @ 15-17-17-35, and I was not able to get CAS 15 to reliably POST first time at 2666MHz.
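
The latency cost of those looser timings is easy to work out: first-word latency in nanoseconds is the CAS latency divided by the memory clock, which is half the DDR transfer rate. A quick sketch:

```python
# First-word latency in ns: CAS cycles / memory clock, where the
# memory clock (MHz) is half the DDR transfer rate (MT/s).
def cas_latency_ns(cl: int, transfer_rate_mt_s: int) -> float:
    return cl * 2000 / transfer_rate_mt_s

for label, cl, rate in [
    ("Rated XMP: DDR4-3000 CL15", 15, 3000),
    ("Running:   DDR4-2666 CL16", 16, 2666),
]:
    print(f"{label} -> {cas_latency_ns(cl, rate):.1f} ns")

# 12.0 ns vs the rated 10.0 ns - a real but modest latency penalty.
# The bigger cost on Ryzen is bandwidth, since Infinity Fabric speed
# is tied to the memory clock.
```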

Unfortunately, I cannot get the DDR4-2933 XMP profile to POST at all. Even with higher memory voltages and slower timings, it’s not going to budge. I did try DDR4-2800, but no dice there either.

The BIOS has been updated with waaaaaay more memory tweaking options – so I bet ASRock will be going through and working out more appropriate memory profiles for Hynix-based RAM. I’m hoping for some future BIOS update to magically allow me to get to that magic DDR4-2933 speed!

So some progress from AMD has definitely been made – my max overclock has gone up 100MHz to 3.9GHz @ 1.4V:

And yes – that voltage reading is still totally wrong. As for the “Bench” tab:

Against an Intel 6950X at stock – this thing gets scarily close considering that is a ~£1600 part.

I’ve only been using this new AGESA code for the last day or two – so far everything seems to be rock-stable. I’m still able to game, render video – do all the usual high-load stuff without things falling over.

So that’s my brief two cents on this matter – I am glad that ASRock are keeping on top of the updates. They do seem to take a bit longer than other vendors, but at least when they are released they seem to be stable. Which is only a good thing!

25
May
2017

AMP support!

Just a quick blog update: I’ve got AMP enabled and it seems to work great! If you’re on a mobile device and find it tricky to navigate the site, put /amp on the end of the URL.

For example:

http://www.thedevilonholiday.co.uk/flammenwerfer-some-progress/amp/

I’m going to see if I can get a little button at the top of the pages to jump to the amp equivalent. Let me know if this helps out 😊

23
May
2017

Flammenwerfer: Some Progress

So it’s been a while since I got my hands on Ryzen, and I thought it was time to fill you all in on the grisly details! I have done some work on Flammenwerfer, though with Ryzen being a new platform and all, I have had a couple of issues.

I can confidently say that at the moment, memory support is a bit of a nightmare. I am using Corsair Vengeance LED 3000MHz DDR4 modules – these turned out to be Hynix-based, which means 2666MHz is the best I can get to run at all.

Except there’s another problem: the cold-boot memory bug that’s plaguing many people. I can boot the system at 2133MHz, use it for 15 minutes, set the speed to 2666MHz and all will be fine – except the system will fail to POST and revert back to defaults the next time the power is cycled. This clearly is not viable moving forwards, so at the moment 2400MHz is what I am running.

600MHz below their rated speed 🙁

The other problem I am currently facing is the lack of a native Linux driver for the built-in RAID controller. I am booting from two 240GB SanDisk SSDs in RAID0, and Linux cannot see that array or any of the other hard drives at the moment. There are drivers available online, but it’s rather tricky to get them installed properly. I may just grab a small mSATA drive for Linux and call it a day.

As for the hardware, some other changes have been made. Well, I did want to change the AIO, didn’t I? Weeeeeell, I might have gone slightly overboard…

That’s the Deepcool “GamerStorm” Captain 360 EX. I will be writing a full review on this – but the short version goes like this:

  • It’s a 360mm AIO (3x 120mm fans on the radiator)
  • For me, I had to get the AM4 mounting hardware separately, but I believe that newer units have it included as standard
  • It was an absolute nightmare to fit in the top of the H440 – but it fits very nicely
  • The tubes are very long – designed to reach the front of the case. It is a very good job I went for a case with no drive bays – they would have interfered with the tubing.
  • The tubing is wrapped with black paracord-type material which is very nice and feels good quality – but it makes the tubing a bit on the stiff side
  • For me, the tubing layout ended up looking great. The RAM isn’t covered and nothing is stressed.
  • The pump has a red LED ring light
  • The fans spin from 500 – 1800 RPM and the pump you just leave at full speed. It’s pretty much silent.

Yes, it’s total overkill. It also keeps my 1700X under 55 degC when under full load at stock, and under 60 degC when overclocked to 3.9GHz.

The last thing I have added… well it’s 2017 – so it would be rude not to add some RGB lighting!

The ASRock RGBLED software, though basic, lets you pick from several colour modes until you find the one that suits you best. For me, it’s a static dim red:

Some video of it all in action:

I taped over the incredibly obnoxious white LED on the ASUS Strix GPU… that thing was born from the fiery core of the sun itself.

The last upgrade will be the GPU – currently it’s an ASUS Strix GTX 960 4GB. It’s not the most powerful card in the world, and with the 1700X in there it looks a bit out of place. Flammenwerfer wouldn’t be complete without a certain upcoming GPU… damnit AMD, I am always waiting for you to release things!

Vega, I am ready.

19
Apr
2017

The AMD Ryzen 7 1700X: Power or Bust?

So it is time to dig into the new build: Flammenwerfer! First up is the AMD Ryzen 7 1700X – does upgrading from a 6-core FX6300 to an 8-core 1700X yield any kind of benefit?

…erm…yes?!?

I was not prepared. Not prepared at all for the sheer speed of the new Ryzen architecture. This thing stomps all over the old FX6300 in every single metric – even when the FX6300 was overclocked to 4.2GHz. The tests I’ve performed today compare the FX6300 against the 1700X stock-for-stock. Not fair, I know, but it’s all I have to compare.

Don’t believe me? Here – have some Geekbench results:

So, we are looking at a 79% increase in single core performance and a 285% increase in multi-core performance in Geekbench. Some call it a modest performance jump, I call it turning the dial right past 11 and breaking it off the stereo.

In Passmark, the FX6300 scored 6573 points. Not bad for a sub-£100 CPU, but it pales in comparison to the monstrous 14707 that the 1700X puts out.

Let’s do my classic perf/watt test, using TDPs and Passmark scores (a runnable sketch of the arithmetic follows the list):

  • AMD Phenom II X4 965 BE: 4222 / 140W TDP = 30.2 points/watt
  • AMD Sempron 145: 803 / 45W TDP = 17.8 points/watt
  • AMD FX6300: 6573 points / 95W TDP = 69.2 points/watt
  • Intel Core i3 4360T: 4599 / 35W TDP = 131.4 points/watt
  • AMD Ryzen 7 1700X: 14707 / 95W TDP = 154.8 points/watt
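
And the arithmetic itself, for anyone who wants to drop in their own chips:

```python
# Perf/watt from Passmark score and TDP - the figures from the list above.
cpus = [
    ("AMD Phenom II X4 965 BE",  4222, 140),
    ("AMD Sempron 145",           803,  45),
    ("AMD FX6300",               6573,  95),
    ("Intel Core i3 4360T",      4599,  35),
    ("AMD Ryzen 7 1700X",       14707,  95),
]

for name, score, tdp_w in sorted(cpus, key=lambda c: c[1] / c[2]):
    print(f"{name:<25} {score / tdp_w:6.1f} points/watt")

# TDP is a coarse proxy for real power draw, so treat these as ballpark
# rankings rather than measured efficiency.
```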

So there we have it folks! A new performance/watt winner out of my current systems! In terms of price/performance, however, the FX6300 still wins here – mostly thanks to its own low price. Have a look at the Intel X99 parts that the 1700X battles against… well, it’s no contest really.

If we look at Intel’s 6850K – that scores a similar number of points on CPUbenchmark.net, but it’s a £620 part. Plus, in most instances, the 1700X has been seen to make even the 6900K look uncomfortable in terms of performance at its £1000 price tag.

Cinebench R15 yielded a score of 415 for the FX6300. The 1700X? 1549. Yeah, almost 4x the score. At the same wattage. Absolutely nuts.

Gaming is definitely improved – I was getting a lot of stuttering on the old system. Often the FX6300 would be maxed out in games such as Battlefield 1 and Rainbow Six: Siege. Not the case anymore – it feels like I upgraded the GPU too! The improvement in smoothness and minimum frame rates is unbelievable. I will write a more detailed post about gaming performance soon – a hot topic when it comes to Ryzen!

You may be wondering about the CoolerMaster Hyper 212 Plus I am still using. I have the fan profiles set to run the fan at minimum speed at idle (about 700RPM), and that sits the CPU at about 35 degC. This is using the Windows “High Performance” power profile.

You can also see the sensor that has the 20 degC offset applied to it – the “Package” temperature. It is that sensor that AMD uses for XFR (which works nicely – you can see the 3.9 GHz maximum boost). No, I am not running 1.55V through the CPU – that bit is wrong. Nor is it 2.5V. Some of these readings are all over the place at the moment, but this is just to show the idle temperatures.

Load (using the AIDA64 system stability test with CPU, FPU and cache stressing enabled):

So under full load we are looking at just under 60 degC for the “CPU” temperature. The “Package” (minus 20 degC) seems to sit in the low 70s. Funnily enough, the GPU temperature slowly drops as the fans ramp up (the ASUS Strix would be in passive cooling mode!). We are looking at a fan speed of about 1300 RPM with the profile I have set up – not silent, but not at all loud.

I have not tried to overclock the chip with the Hyper 212 – nor will I. I will save the overclocking until I get the AIO – the fan speed and temperature would go a bit silly if I did try it on the 212.

So I had best summarise the experience so far. In comparison to my old FX6300, it wipes the floor with it. It’s not the best value chip on the block – but I paid for performance, and that’s exactly what I got – and the price is not bad for the speeds you get. Intel, you had better be afraid right now. Bravo, AMD – you took your sweet time but it’s paid off in my book. 10/10.

Up next – the motherboard: the ASRock Fatal1ty X370 Gaming K4!