Much like a dream, the idea for this series of posts was so clear at the time. That time was yesterday, November 27th, at about 6:00PM EST whilst I was in the shower. Also like a dream, the clarity of the idea has faded somewhat in the intervening 19 hours.
However, I’m still here to write it because I feel compelled to. I’m at a point in my life where things are really quite different from what’s gone before in some ways, whilst remaining resolutely, defiantly the same in others. Perhaps I hope that this series of posts will provide me with some stream-of-consciousness-introspection, a kind of ‘Write Your Own Therapy’ exercise. Maybe I’m just a closet narcissist who wants to write more about himself.
Uncertainty is probably the most appropriate feeling as I go into this. As the title says, this is Without a Plan, subtitle TBD. I’ve lived much of my life without what I would consider any concrete plan. I’ve taken opportunities as they arise, certainly missed out on my share too, bounced around the United Kingdom and, latterly, the United States, and questioned myself more times than I care to admit. I’ve experienced truly wonderful moments, and plumbed the depths of depression.
“So what?” you might think, “That describes everybody.”
Perhaps it does, perhaps it doesn’t. One thing I’ve come to realize is that many of us are broken in our own ways, but that some people really just do Have Their Shit Together in ways that I wish I could, but probably never will.
Anyway, let’s get into it, shall we?
I’m a 35-year-old IT Manager working for a software company in the Northeast of the United States.
I’ve been busy with work lately, but got some time this Sunday to work on the next part of my build – authentication.
The Unraid build itself is coming on well, but I now have 14 separate docker containers doing things for me, all with their own individual authentication methods. If I plan on opening up the server to external access (which I do), then I need something to manage usernames and passwords from a central point.
That something is LDAP.
LDAP stands for Lightweight Directory Access Protocol, and is an open, vendor-neutral, industry standard application protocol for accessing and maintaining distributed directory information services over an Internet Protocol (IP) network.
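If you’ve never looked inside one, a directory is just a tree of entries, each with a distinguished name and a set of attributes. A user entry in LDIF (the standard text format for LDAP data) looks something like the below – the domain and user are made up for illustration, though the cn=users,cn=accounts layout matches what FreeIPA uses:

```ldif
# A hypothetical user entry; dc=example,dc=com is a placeholder domain
dn: uid=jsmith,cn=users,cn=accounts,dc=example,dc=com
objectClass: inetOrgPerson
objectClass: posixAccount
uid: jsmith
cn: John Smith
sn: Smith
uidNumber: 10001
gidNumber: 10001
homeDirectory: /home/jsmith
```

The point being: any application that speaks LDAP can look up that one entry for authentication, rather than each container keeping its own user database.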
The implementation of LDAP most people will have heard of is Microsoft’s Active Directory. It’s what I’m most familiar with, having worked with flavors of AD from 2003 onwards. It’s easy to set up, easy to work with, and – in my view – the best implementation of LDAP for a heavily Windows-based environment.
I’m not exactly running a Windows-heavy environment, though. My primary machine, and my girlfriend’s, runs Windows 10. However, I have 14 docker containers (and growing), and some of the things I want to implement require integration with whatever LDAP server you’re running – I’m not sure how well AD would play with those.
There’s also the fact that AD requires Windows Server, which means a license, and some fairly hefty system requirements.
Plus, it’s fun to learn new things.
So I’ll be using something called FreeIPA (hopefully the punny title makes sense now) on a CentOS 8 install, with 2 vCPUs, 4GB RAM and a 60GB disk.
In my last post I commented about my USB situation, which I hoped would be quickly resolved with a USB expansion card. I picked up this model from Inateck: a 5-port USB-A device which would allow me to connect my USB switcher for keyboard & webcam, a headset for gaming, as well as gamepads and wheels.
It arrived with customary Amazon quickness, and I added it to the machine. Slightly annoyingly, it (like the others I looked at) required a power connection from the PSU, as the PCIe slot can’t supply the 5V required. I was out of SATA connectors, so this meant running a Molex cable through the case (urgh).
I brought the machine back online and found the new USB device sitting happily in its own IOMMU group, so I bound it to VFIO, rebooted the server, then passed it through to the Windows machine and started it up.
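For anyone curious what ‘binding to VFIO’ actually involves: on Unraid (at least on the releases without the point-and-click VFIO tool) it means appending the device’s vendor:device ID pair to the syslinux boot configuration, so the vfio-pci driver claims the card before anything else can. The ID below is a placeholder – the real one comes from Tools → System Devices:

```
label Unraid OS
  menu default
  kernel /bzimage
  append vfio-pci.ids=1b21:2142 initrd=/bzroot
```

After a reboot, the device shows as bound to vfio-pci and can be assigned to a VM in its settings page.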
So it’s been six days since my last post, and after a busy and at times frustrating week (work-wise, nothing to do with Blackjack) I have some more updates.
First, good news. The Plex migration worked flawlessly as I mentioned in the last post. We’ve been running it for 6 days now and have watched a bunch of stuff on it without any issue whatsoever. This is what should have happened but I’m still pleased.
As you can see, I’m also penning this on my Windows 10 VM, using dual screens. The performance is excellent – it’s faster at booting than the bare metal install on my old machine!
I’ve now shut down my old machine, physically replacing it with Blackjack and swapping the rest of the memory. We’re now running on 64GB total, with 24GB reserved for the Windows 10 machine. It was pretty happy with 8 and I’m sure would be happy with 16, but if I have a surplus why not use it? So far the containers I have running aren’t taxing the system much at all, but I have further plans which may drive that usage higher.
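For reference, that memory reservation lives in the VM’s libvirt XML (reachable via the VM’s ‘Edit XML’ view in Unraid); 24GB comes out as the following, where the KiB figure is just 24 × 1024 × 1024:

```xml
<memory unit='KiB'>25165824</memory>
<currentMemory unit='KiB'>25165824</currentMemory>
```

The Unraid form view does this arithmetic for you, but it’s handy to know where the number actually ends up.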
There have been a few things that haven’t quite worked as well as I’d hoped though.
So, my data is all moved from t’old machine t’new one (for any Americans, you’ll need to read that sentence in a strong Yorkshire accent. Good luck.)
That could be that, but losing all of the ‘watched/unwatched’ and progress through series would be a bit of a pain in the arse, so I’m trying to migrate the metadata of my now-old Plex install (Razorback) to the new one (Blackjack).
On Windows, Plex stores everything in C:\Users\username\AppData\Local\Plex Media Server.
In Docker, that data is located at /mnt/cache/appdata/Plex-Media-Server/Library/Application Support/Plex Media Server/
Plex’s FAQ does include information on moving Plex data around, but it’s a far cry from what you really need to know in a scenario like this. That’s understandable: there are so many potential scenarios and configurations that it would be unreasonable to expect Plex to stay on top of them all and document them adequately – and if a documented process didn’t work, people would come crying to Plex, who’d either have to support it or risk the wrath of Unhappy Internet People.
I’ll make a long story short – I’m going with the basic bitch method of just copying the (several hundred thousand) files across the network from my Windows machine. I tried zipping the whole lot up and then unzipping it on the host, but with various combinations of commands I always got the same error – caution: filename not matched – which didn’t make sense then, and still doesn’t now. I tried a number of solutions from researching online, but decided quite quickly that this is one of those annoying Linux things I know I’ll spin my wheels on for an hour or so before having to do it the basic way anyway.
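For what it’s worth, one common cause of unzip’s caution: filename not matched is an unquoted path – unzip treats any extra arguments as archive-member patterns, so a folder name with spaces like ‘Plex Media Server’ can get split into separate (non-matching) patterns. Had I persisted, Python’s zipfile module sidesteps shell quoting entirely. A minimal sketch, with made-up paths standing in for the real Plex data:

```python
import os
import tempfile
import zipfile

# Build a small test archive whose top-level folder contains spaces,
# mimicking "Plex Media Server" (the real path is an assumption here).
work = tempfile.mkdtemp()
src = os.path.join(work, "Plex Media Server")
os.makedirs(src)
with open(os.path.join(src, "library.db"), "w") as f:
    f.write("not a real database")

archive = os.path.join(work, "plexdata.zip")
with zipfile.ZipFile(archive, "w") as zf:
    zf.write(os.path.join(src, "library.db"),
             arcname="Plex Media Server/library.db")

# zipfile does no shell wildcard matching, so paths with spaces or
# brackets extract without any "filename not matched" complaints.
dest = os.path.join(work, "extracted")
with zipfile.ZipFile(archive) as zf:
    zf.extractall(dest)

print(os.path.exists(os.path.join(dest, "Plex Media Server", "library.db")))
```

Whether that would have been faster than copying a few hundred thousand files over SMB is another question.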
So, I skipped ahead.
At the very least I am grateful that Plex have kept the folder structure and mechanisms broadly identical across different platforms. I’ve certainly dealt with software in my time where a Windows and a Linux version of an app were entirely incompatible and there was no hope of moving settings from one to the other, so this is a refreshing change from previous experiences.
Of course now we’ll have to see if this actually works or not. I have … middling hopes of success, but we’ll see.
A few hours later …
So the metadata is all copied over. I started the Plex Docker and immediately went into the server settings and edited my libraries; the existing libraries pointed at the old media locations, which was good. I added the new locations and let Plex scan them.
And … it worked! My On Deck still shows a half-watched episode of Brooklyn Nine-Nine, and my watched / unwatched lists are all there.
So I’ll say up front: it’s possible that I haven’t set my storage up in the optimal way, and that choosing ‘just’ a 500GB cache drive has caused me some small issues, but I think that in daily operation things should be fine.
My biggest challenge with the transition to the new machine was always going to be moving the data from old to new, whilst keeping the old one running and serving media. As it transpired, we’ve had some internet issues over the last week, which have meant the Plex server has been inaccessible to the outside world most of the time anyway, but I had already hatched a plan and that was what I stuck to.
I had 4x6TB (not 4x4TB as I said in my first post) in my Windows machine configured in Windows Storage Spaces. Due to the way it was configured, I could only remove one of the disks, despite having just over 1 disk-worth of data. Therefore I’d need to move everything before I could destroy the array on my Windows machine and move the other disks.
The cache drive was a savior here, both in terms of storage and speed.
As I mentioned in the last post, SpaceInvaderOne is a brilliant resource for UnRAID – and a bunch of other things besides. It also doesn’t hurt that he’s a Brit based out of one of my favorite cities in the UK, Bristol.
I’ve mentioned a bit about parity in these posts and, if you’re wondering how it works, he has a brilliant explainer here.
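His video does it far better justice, but the core trick of single parity is just XOR: the parity disk stores the XOR of every data disk, byte by byte, so XOR-ing the parity with the survivors rebuilds any one lost disk. A toy sketch with three tiny ‘disks’ (the byte values are obviously made up):

```python
# Toy model of single-parity protection, as used by Unraid.
# Three "data disks" hold bytes; the parity disk stores their XOR.
data_disks = [b"\x01\x02\x03", b"\x10\x20\x30", b"\xaa\xbb\xcc"]
parity = bytes(a ^ b ^ c for a, b, c in zip(*data_disks))

# Simulate losing disk 1: XOR the parity with the surviving disks
# and the missing contents come back byte-for-byte.
survivors = [data_disks[0], data_disks[2], parity]
rebuilt = bytes(a ^ b ^ c for a, b, c in zip(*survivors))

assert rebuilt == data_disks[1]
print("disk 1 rebuilt:", rebuilt.hex())  # → disk 1 rebuilt: 102030
```

That’s also why the parity drive has to be at least as large as the biggest data disk – it needs a parity byte for every byte that could be lost.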
He also talks in a subsequent video about key plugins to use with UnRAID. I was expecting a list of plugins, which is what I got – but I also got something far more impressive than the disparate group of plugins with differing install techniques and documentation that I’d anticipated …
Community Applications is the plugin you must install with UnRAID, because it makes everything else so damn easy. I’m not kidding – once it’s installed, this is your one-stop-shop for searching for plugins and one-click installing them. Most plugins I’ve found will also link you directly to the UnRAID Community Forum thread for that plugin should you have questions or just want to find out more.
It’s easy and brilliant and exactly what it should be. SpaceInvaderOne had some particular recommendations which I followed because after all, he’s the expert.
The first one to install is Fix Common Problems, which does exactly what it says on the tin. It scans your system, and tells you if things are configured incorrectly, not configured at all and should be, or anything else that means your system could potentially not be running at its best.
Next up is the Dynamix series of plugins. These do everything from scheduling a cron job to run regular TRIM operations on any SSDs in the system that support it, to helpful visualization tools showing disk usage and system temperature information at a glance. There are a bunch more that I need to explore, but they really seem to have thought of a ton of use cases and developed for them.
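Under the hood, a scheduled TRIM is just a cron entry calling fstrim; something like the below weekly job (the schedule itself is arbitrary – the plugin lets you pick your own):

```
# Run at 4:30am every Sunday; -a trims all mounted filesystems that support it
30 4 * * 0 /sbin/fstrim -a
```

The nice part is Dynamix wraps this in a UI so you never have to touch a crontab directly.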
It’s Getting Hot in Here
One of the things I really wanted for this machine was for it to be quiet. No shit right? Half the components I ordered are literally from a company with that name. My biggest concern was balancing temperature and noise – it’s relatively easy to keep a machine cool if you blast air through it at high speed, but that comes with a lot of noise. Equally it’s easy to keep a machine quiet – lots of large fans run at slow speed – but that tends to let things get hot.
It probably didn’t help that we had an unseasonably warm week, and that I was adding a third machine to a room that already had two in it, but – thanks to the Dynamix System Temperature plugin – I can see I’ve been running a little hotter than I would really like.
My existing machine runs at mid-30s (Celsius) at idle, and I stress tested it up to 85 degrees, which is well within tolerances for the hardware – but that’s in a larger case with better ventilation at the front, and fewer cores.
I cranked my fan controller on the case from ‘Silent’ to ‘Performance’ but it honestly didn’t make much difference – however the BIOS is also set to Silent, so I may need to reboot and play around with those settings to crank the fans up a bit without making it too loud.
There’s also a reasonable chance that I screwed up the heatsink somehow, with the mounting issues I mentioned earlier. Either way, once I have the machine fully up and running and I’m ready to move the rest of the disks over, I need to do a lot of cable management to get the machine into its final state, so I can remount the heatsink if I need to, and move some fans about.
The main barrier to swapping machines is replacing the Plex server running on Windows with the new Blackjack-hosted data. As mentioned in my first UnRAID post, I had intended to run Plex on Ubuntu, but I changed my mind and went for a straight-up container. This meant that installing Plex was simpler than it’s ever been. I set up a couple of shares for the media, pointed Plex at them, and it was ready to roll.
Obviously there’s nothing in it yet, I still need to migrate all the media over (a process ongoing as I write). The last thing then is to try and migrate the database, keeping all of the ‘watched/unwatched’ tallies for me and the other users. Once that’s done and confirmed, I can delete the data from the Windows machine and relocate the disks.
Remote access to this box is going to be important. I’ve used No-IP for years for keeping my domain name linked to the IP of wherever my machines are located. Usually this was an app installed on my machine but now I’m in the world of containers my first question is ‘Is there a container for that?’
The answer is yes, so I’ve now offloaded one more thing to the main system that I don’t have to worry about a guest OS doing.
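In docker-compose terms, a DDNS updater like this looks something like the below. The image name and environment variables here are placeholders of my own – the actual template (and its real variable names) comes from Community Applications:

```yaml
version: "3"
services:
  ddns:
    image: example/noip-ddns   # placeholder; use the image from the CA template
    environment:
      - NOIP_USERNAME=me@example.com
      - NOIP_PASSWORD=changeme
      - NOIP_HOSTNAMES=myhome.example.com
    restart: unless-stopped
```

The restart policy is the quietly important bit: the updater comes back on its own after a reboot, with no guest OS involved.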
A couple of things I forgot to mention in the last post.
Firstly, on the built-in fans: the two fans were identical, but held in with very different screws. The rear fan had what I would consider to be ‘regular PC case screws’, but the front fan was held in with odd, stubby little screws which, when removed, had a strange sticky gasket attached that sort of broke away as I removed them.
Perhaps typical purchasers of these cases don’t remove the existing case fans and just add to them but … I found it an odd difference, and a disappointing lack of quality on the front screws.
Lastly, the ‘cable management’ around the back of the motherboard tray started out well, but started to become problematic. The case panel is lined with a foam insert, which is great for deadening vibrations and thus noise, but it means there’s not a lot of space in there. My goal was to keep the motherboard side of the case clean and clear, but I may need to let more cabling into the body of the case in order to not have everything so smooshed up behind it.
Anyway, it was another day before I could get the machine connected up to a monitor and to begin working on it. I booted into the BIOS/UEFI setup first to tweak things and see what I was dealing with.
The ASRock Z490 Taichi has what I’d call a ‘typical’ UEFI setup screen – graphics that (to me) hark back to 90s Japan – but it was functional and let me get to what I needed. I went through all the settings, making sure to enable the virtualization features, as well as turning on the IOMMU passthrough features I’d need later.
I probably spent most time fiddling with the motherboard’s built-in LEDs. They do all sorts of things, but I just wanted a static white light. I’ve yet to see if I’ll be able to install software on my Windows VM to manage that further – possibly turning it off at night automatically – but for now it’s fine.
That’s right, the God Box has a name. My previous naming conventions followed The Expanse series of books (and now TV), but here I’ve swapped one form of plagiarism for another.
When discussing the idea of this build with one of my colleagues, he suggested a color scheme of white-on-black, which I liked the sound of and subsequently stole. When I was a child there were these sweets called Blackjacks – white and black chewy candy – and Fruit Salads. The black & white color scheme made ‘Blackjack’ a fitting choice.
I’ll probably call the Ubuntu installation Fruit Salad (that color scheme is a bit more of a stretch …) and I still need something fitting for Windows, but … whatever, we’re getting off topic.
Let’s talk hardware.
An Intel Core i9-10850K sits on an ASRock Z490 Taichi motherboard. Not pictured is the 64GB of G.Skill DDR4-2133 RAM, which was (at the time of the picture) installed in my existing machine. The CPU is cooled by the be quiet! Shadow Rock 3 CPU cooler (center). Flanking that in the image are five white be quiet! Shadow Wings 2 140mm case fans. To the left, in front of the fans/motherboard, are a 500GB and a 1TB Samsung 970 Evo NVMe SSD, for use as a cache drive and VM file drive respectively. To the far right is a Corsair RM750x power supply in white (with white braided cables), and atop that is a Zotac Gaming GeForce RTX 2060 graphics card. On top of that is a UniFi 16-port PoE switch. At the very bottom of the picture are white SATA cables, white SAS cables for the SAS controller card which will eventually be transplanted along with the other disks from my main machine, and an 8TB WD Red NAS disk for parity.
Finally, the whole lot is ensconced in a be quiet! Dark Base 700 ATX tower, with additional drive bays purchased alongside it.
I have been out of the consumer hardware market for a long time. Despite building a ‘gaming’ PC (that doesn’t see much gaming) back in 2018, I’ve been out of the game for the past decade or so. In the early-to-mid-2000s my bedroom was a sea of tech; at one point I was running my own mail server (over a tiny ADSL connection), had two case-modded machines and a wall of monitors (CRT, natch) that would have made The Architect jealous.
Since then, I’ve been “on the road”. When I first moved into a shared house, there wasn’t room for a desk, so I had to shrink my tech down to something mobile – a Dell XPS 13. They continue to be an excellent laptop line all these years later, though sadly mine destroyed itself by boiling its integrated graphics card.
It got replaced with a MacBook Pro, thus beginning an 11+ year love of all things Apple. Windows 8 was released a few years later, cementing my stay in Apple-land. When I did get a desk back, it was an iMac I bought, not a Windows tower. That, once more, had to make way for mobile computing when I found myself on the road again, moving between flats and houses and jobs.
Another big part of the Wintel apathy was down to my job(s). Back in the day when I had an array of machines and the digital world at my fingertips, IT as a job was something I aspired to. Once I got into it, I found that spending 8 hours a day working on computers is a really great way to sap your enthusiasm for doing more of it once you get home.
When I finally ‘settled’ in New York (for two years) I finally got around to getting a Windows machine again, ostensibly for gaming. My girlfriend is an avid gamer, and I wanted us to share that hobby a little. Well, she probably did more than me – like I said, after 8 hours at a desk fixing problems, the last thing I want to do is come home and sit at a desk for another 4. I quickly realized that I was well out of touch with modern components, what manufacturers were best, and so on. I think in the end I just found a few system guides that roughly correlated to my budget, and whipped the thing together.
All in all, it’s been a very solid, reliable machine. An Intel Core i7-8700K with 32 (now 64) GB of RAM, an NVMe boot drive (that was a nice change from my last Windows machine!) and some storage, and things were great.
I was also running with a Synology NAS for my Plex media library. It was a little 2-disk 4TB mirrored job, which worked ‘just fine’, but I found that I had to offload the actual media transcoding (for remote streams) to my PC. Once that filled up, rather than getting a new NAS I decided just to move all the disks and data to my main PC and have that do everything. A new case was sourced (a Lian-Li PC-A75), the hardware transplanted, and I also took the opportunity to Frankenstein some other hardware together in my old case to replace my girlfriend’s PC which was on its last legs. The only thing that came over from the old machine was the graphics card, which is now slowly dying – more of that anon.
This has all been working fine, but it was a pretty quick & dirty switch over that I hadn’t spent much time thinking about or planning.
Fast forward a few months, and I hired a young guy at work who was also a Plex aficionado and was in the middle of rebuilding his own – rack mounted – Plex server, along with a host of other additions. My interest was piqued, and he told me more about his system. He had rebuilt it using Docker containers on Linux, and had a fully interactive web frontend to access media, as well as requesting it, auto-downloading and categorizing it, and more besides.
This was something I hadn’t even considered – again, my day job kept me plenty busy and I wasn’t exactly looking for more things to keep me in front of a screen. However, this information had awoken something in me – an urge to tinker with personal tech that had been lurking at the back of my mind, long dormant.
Around this time, the graphics card on my girlfriend’s PC started doing more weird stuff. Photoshop would randomly complain that it couldn’t initialize the graphics driver and deny access to a bunch of features. One screen would stop drawing certain items in Chrome. It had been on its way out for a while, but I decided it was time to do something about it, so started looking for a good replacement.
Somehow, Googling for a graphics card replacement turned into a whole system replacement. That then became researching individual components, and suddenly I was figuring out how to split my systems in two.
With no end to the Covid pandemic in sight, the winter drawing closer, along with its cousins Seasonal Depression and It’s Too F-ing Cold To Go Anywhere, I decided now was a good time to rebuild my tech and get stuck into a project or three.
Plan A was similar to my 2018 build – sticking with my conventional, out-of-the-game-for-years knowledge. I would build a small form factor machine into a small case that could fit a bunch of disks. Something small but good looking, maybe that I could put a novel cooling solution inside, to house my media, and what would become my projects for the season – docker containers for stuff like Radarr, Sonarr, Emby, and so on.
The other machine would be a rehoming of my existing machine, again with some interesting new cooling – perhaps a trip back to Water-cooling Town, a place I had visited first at University many moons ago. It was a mostly painful journey that ended up with a destroyed graphics card, but things had moved on in the last ~16 years.
That was when I stumbled upon UnRAID, VFIO, IOMMU groups, and other wondrous things. I discovered that now, people were running some form of Linux on bare metal, then virtualizing a bunch of machines inside of it, and directly passing through pieces of the hardware to those operating systems. People are now literally building multiple gaming PCs inside of one physical box.
The concept of virtualizing my primary OS on a device had never even occurred to me. I’m familiar with virtualization in the work place – most of my core infrastructure runs on Hyper-V, and without VMWare vSphere we could never do the development work we do without incurring crippling hardware costs. Heck, I have a virtual copy of macOS running behind this browser window right now.
But virtualizing the primary OS? With graphics and input/output that operate like a bare metal install on consumer hardware? It’s something I’d never considered, but now seems obvious. Hardware has come on such a long way since I last built PCs with any regularity, and for the last 10 years my “IT brain” has been firmly locked into the corporate world.
In the case of UnRAID, the software is actually running from a USB device (in my case it will be a 16GB SanDisk Ultra Fit), with the application loaded into memory. This leaves all of the bare metal hardware free to be assigned at will (well, mostly …) to whatever you want to run on it. And with UnRAID, you can run virtual machines and docker containers within it, whilst UnRAID itself manages your storage pools for you.
The whole thing sounds like a really fun project. As I mentioned, I’ve ‘done virtualizing’ for years now, but in the corporate world. Slapping Hyper-V on a rack mounted server and installing a ton of single-role server operating systems isn’t fun, it’s procedure.
So what’s intended for this Project Box (which I’m casually calling the God Box), to while away the winter hours? There are a few things I needed to make sure I could stick to:
A ‘daily driver’ machine for standard web browsing, life admin, occasional photo editing, that sort of thing.
A gaming machine, for the odd occasion I feel like doing that. I’ve just been gifted Red Dead Redemption, so that’s something to get my teeth into, and I love playing Civ as well.
A Plex server for media.
So, we’ll be running UnRAID as the ‘core OS’ of the device, from USB. The machine will have 32GB of RAM (to start with), stolen from my existing machine. There will be a 1TB NVMe SSD to host ‘key’ virtual machines – in this case Windows 10 and Ubuntu Linux. A second 500GB NVMe SSD will act as a cache drive for the system’s main storage.
Two 1TB Western Digital Blue SSDs will provide a striped array which will form the data partition for the Windows 10 machine. That’s based on current usage – basically it’s a partition for my documents and a local copy of my iTunes library (the idea is if the internet ever goes out for a long time, at least I can still play music).
The Ubuntu Linux installation will run Plex. Whilst I could run Plex in its own native container, I believe the performance is better this way. I can pass the Intel Integrated GPU through to the Linux VM, and perform hardware transcoding on it for pretty good performance. Originally I was intending to purchase a second graphics card for this, but I don’t think it’s necessary anymore.
For storage of the media, I’m adding an 8TB disk for parity. Data will be stored on the existing 4TB disks I have in my existing machine, but the parity drive will maintain data integrity, as well as allowing future expansion with larger disks. The parity drive must be equal to or larger than the largest disk in your array, and anything larger than 8TB gets expensive; as my media grows I can slowly replace the 4TB disks with 8TB ones and still have a ton of storage before needing to consider anything else.
I’ve already begun prepping the migration, removing one 4TB disk from the Windows Storage Spaces array on my machine. That will form the beginning of the array on the God Box, with the parity drive building its index as data is migrated to it – more efficient (and safer) than migrating all the data first and then building parity. I have about 6TB of media, so I can migrate most of it to the new machine, remove another disk from the array and put it in the new machine before finishing the transfer, then move the remaining disks.
All of this will be housed in a new enclosure – a Dark Base 700 from be quiet!, using fans and a cooler from the same company. Aside from Noctua’s, these are some of the quietest fans on the market, and they look pretty cool too.
Having shamelessly stolen the idea from my colleague, I’m going with a white-on-black theme, with white components inside the case (where I can get them) and white cabling. be quiet! supply white versions of their fans, and a white cooling block for the CPU which should finish the look off quite nicely.
Lastly, horsepower. I was very tempted to go with an AMD Ryzen build, as they seem to provide more bang-for-buck. However I am wedded to Plex as a media player, and AMD support is poor with that product, so I’m sticking to Intel. A Core i9-10850K gives me 10 cores to play with, and an ASRock Z490 Taichi motherboard seemed to be a good pick to tie it all together.
be quiet! offer a ‘build your character’ page for their cases, so this is roughly how it should come out looking (albeit with more white highlights).
The components start arriving this weekend! The initial build will take some time (getting the cable management right is going to be a big consideration for this build), and I think fettling UnRAID to get the basics working is going to take some time.
However, once the base machine is up and running I look forward to migrating the data, then getting started on expanding the setup! There will be a lot to come, and I’m looking forward to sharing this project as it progresses.