While we do have a formula we use for designing and building our servers, each server is custom-made for the client's needs. Here are some of the principles we've created or subscribed to when designing the hardware and software setup of our servers:

Drive Layout

Right now, storage is at an interesting point. It used to be that there were hard drives and, well, other hard drives. But today, we have solid-state drives (SSDs). These drives are very fast, which is great for running operating systems and other applications, but does little for a file server (because the network, not the storage, is the bottleneck). They're also very expensive, so if you have a large quantity of data that's just going to sit there until you access it, SSDs are not a good use of money. At some point in the future, SSDs may reach price parity with traditional hard disk drives (HDDs). But for now, we need to find the best mix of performance and economy.
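To make the economics concrete, here's a rough cost-per-terabyte comparison. The prices below are illustrative assumptions for the sake of the math, not current market quotes:

```python
# Rough cost-per-TB comparison between SSD and HDD storage.
# Prices are illustrative assumptions, not real quotes.
drives = {
    "SSD (1 TB)": {"capacity_tb": 1, "price_usd": 100.0},
    "HDD (8 TB)": {"capacity_tb": 8, "price_usd": 160.0},
}

for name, d in drives.items():
    per_tb = d["price_usd"] / d["capacity_tb"]
    print(f"{name}: ${per_tb:.2f}/TB")
```

Even with the numbers fudged, the gap is usually several-fold, which is why bulk file data still lands on HDDs.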

This is why we use a mix of SSDs and HDDs in most of our servers. For instance, our standard file server will include an SSD array for the OS and applications, and an HDD array for file data storage. An application server, on the other hand, which stores a lot less data but needs to access and process it very quickly, might contain only the SSD array.


Virtualization

It's been many years since we've recommended building a server on top of "bare metal" -- that is, loading the server's operating system directly onto the hardware. Except for niche cases, we virtualize all of our servers. We do this by creating a base installation of Linux, then installing KVM (Kernel-based Virtual Machine) on top of that. From there, we can create the Linux or Windows server(s) for use on the network.

Why virtualize? Virtualization abstracts the hardware from the server, which gives us a couple of advantages:

- The server is no longer tied to specific hardware, so it can be moved to a new or repaired host without reinstalling or reconfiguring anything.
- A virtual server can be snapshotted and backed up as a whole, which makes recovery much simpler.

Redundancy

Redundancy comes in many forms, and it's all about maintaining the highest levels of uptime possible -- within budget! Some businesses rely very little on real-time access to the network and could afford to be down for an hour every few months. Others are much more reliant on full uptime, where a few minutes of downtime per year can be very costly. There are solutions available to fit all of these needs, and the selected solution should fit the need. It's costly to pay for more guaranteed uptime than you need, but it's also costly to not have it in place when you do need it.

RAID

At the very minimum, all of our servers use RAID (Redundant Array of Independent Disks) to store data and run the OS. RAID comes in a variety of levels. Some of the most common ones are:

- RAID 1 (mirroring): identical copies of the data are written to two drives; the array survives the failure of one drive.
- RAID 5 (striping with parity): data and parity are spread across three or more drives; the array survives the failure of one drive.
- RAID 6 (double parity): like RAID 5, but with a second set of parity; the array survives the failure of two drives.
- RAID 10 (mirrored stripes): combines mirroring and striping across four or more drives for speed, at the cost of half the raw capacity.

RAID arrays have been around for a long time and are a solid technology. One of their downsides is upgradability. With RAID 5 and 6, you can upgrade by adding more drives, but you don't benefit from adding larger drives. For instance, say you have a RAID 5 array with three 1TB drives. If you add a 2TB drive, half of that new drive will go to waste, because the array treats every member as being the size of its smallest drive. If you want to use larger drives, you have to replace all of the drives in the array, which is very time-consuming.
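The capacity math behind that example can be sketched in a few lines. The helper name here is ours for illustration, but the formula -- smallest drive times (number of drives minus one) -- is how RAID 5 capacity actually works:

```python
def raid5_usable(drive_sizes_tb):
    """Usable capacity of a RAID 5 array: every member is treated as
    being the size of the smallest drive, and one drive's worth of
    space is consumed by parity."""
    n = len(drive_sizes_tb)
    return min(drive_sizes_tb) * (n - 1)

print(raid5_usable([1, 1, 1]))     # three 1TB drives -> 2 TB usable
print(raid5_usable([1, 1, 1, 2]))  # add a 2TB drive -> 3 TB usable; 1 TB of it wasted
```

Adding the 2TB drive grows usable space by only 1TB, which is exactly the "half of that new drive will go to waste" problem described above.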

Additionally, it should be noted that RAID IS NOT BACKUP! There are many ways to incur data loss, such as drive failure, malware infection, data corruption, theft, and user error. RAID protects you only from drive failure. RAID should not be counted as a backup method, and it should not take the place of proper data backups. RAID is intended as a downtime prevention method. With a single drive, if that drive fails, you are down until the system can be rebuilt and restored from backup. With a RAID setup, a drive failure will not usually have that impact, and the failed drive can be scheduled for replacement outside of normal business hours.

Distributed Filesystems

A distributed filesystem is one in which a group of drives spanning multiple servers appears logically as one large filesystem where data can be stored. Our distributed filesystem of choice is LizardFS, because:

- It pools drives of different sizes across any number of servers.
- It lets us set replication goals per file or per directory.
- It keeps data available even if an entire server goes offline.

LizardFS allows us to distribute data over as many servers as we want, with each server free to have a different number of drives of varying sizes. It simply adds all of the drives together, and that's how much space is available. For protection, we can specify a "goal", which is the number of copies we want kept of each "chunk" of any given file or directory. This is what creates the redundancy. In a properly functioning LizardFS setup, you can unplug the power cable from one of the servers, and nothing goes down.
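As a back-of-the-envelope sketch (a hypothetical helper of ours, not the LizardFS API, and ignoring filesystem overhead): with a replication goal of N, usable space is roughly the pooled raw capacity divided by N.

```python
def lizardfs_effective_tb(server_drive_sizes_tb, goal):
    """Rough usable capacity under a replication goal: all drives
    across all servers are pooled, and each chunk is stored
    `goal` times."""
    raw = sum(sum(server) for server in server_drive_sizes_tb)
    return raw / goal

# Three servers with different drive counts and sizes, goal = 2:
servers = [[4, 4, 2], [8, 1], [6, 6, 6, 2]]
print(lizardfs_effective_tb(servers, goal=2))  # 39 TB raw -> 19.5 TB usable
```

Note how the servers don't need matching drive counts or sizes -- the pool just gets bigger as drives are added anywhere.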

For applications where preventing downtime is a primary concern, a distributed filesystem combined with server virtualization is the ultimate solution.

Windows or Linux

We're big fans of using the right tool for the job, whether that's Windows or Linux. Someone looking at us from the outside might say that we're slanted towards Linux. But it's not our fault that Linux just so happens to usually be the right tool for the job. Linux has many advantages over Windows:

- No per-server or per-user licensing costs for the operating system.
- Modest hardware requirements, leaving more of the server's resources for actual work.
- Strong remote administration and automation tools.

There are some cases where Windows is required. For instance, some applications are Windows-only, and require a Windows server in order to run properly. In this case, we can use virtualization to create a Windows server specifically for that purpose, while relegating all non-Windows-only tasks to one or more Linux servers.

Battery Backup

All of your critical back-office equipment should be on battery backup, and servers are no exception. The big issue is not powering your server for hours during an extended outage, but rather preventing your server's operation from being interrupted because the power flickered for half a second. That's enough to shut your server off as if someone had pulled the plug, which means you're not just down for that half second, but for however long it takes to bring the server back up. And shutting a system down uncleanly like this can sometimes mean that starting back up is not a simple process.

A proper battery backup solution will:

- Condition incoming power, smoothing out sags, surges, and momentary flickers.
- Keep the server running through short outages.
- Signal the server to shut itself down cleanly if an extended outage threatens to drain the battery.
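For sizing, a rough runtime estimate helps. The figures below are assumptions for illustration, and real UPS discharge curves are nonlinear, so treat this as an optimistic ceiling rather than a promise:

```python
def ups_runtime_minutes(battery_wh, load_watts, efficiency=0.85):
    """Optimistic runtime estimate: usable battery energy divided by
    the load. Real runtime is shorter because discharge curves are
    nonlinear and batteries age."""
    return battery_wh * efficiency / load_watts * 60

# Assumed example: a 600Wh battery behind a 300W server load.
print(round(ups_runtime_minutes(battery_wh=600, load_watts=300), 1))  # ~102.0 minutes
```

The point isn't hours of runtime -- it's that even a modest battery easily rides out the half-second flickers that would otherwise hard-power-off the server.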

Data Backup

We recommend and implement the 3-2-1 Backup Strategy. This means:

- Keep at least three copies of your data.
- Store those copies on at least two different types of media.
- Keep at least one copy offsite.

Using this strategy, it would be almost impossible to suffer a major data loss. Given that 60% of companies shut down within 6 months of a catastrophic data loss, this is important! Here's how we implement this for you.

First, we set up an onsite backup. This is usually an external drive connected to your server, scheduled to run a backup every few hours, several times per day. Second, we configure an offsite backup, which runs nightly and copies all of your data over an encrypted connection to an offsite server. Using this setup, your network satisfies the 3-2-1 Backup Strategy:

- Three copies of your data: the live data on the server, the onsite backup, and the offsite backup.
- Two different types of media: the server's drive array and the external backup drive.
- One copy offsite: the encrypted offsite server.
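That setup can be sanity-checked with a small sketch. The data model here is hypothetical, for illustration only -- it's not one of our tools:

```python
def satisfies_3_2_1(copies):
    """copies: list of dicts with 'media' and 'offsite' keys.
    Checks the three rules: >= 3 copies, >= 2 media types,
    and >= 1 offsite copy."""
    return (len(copies) >= 3
            and len({c["media"] for c in copies}) >= 2
            and any(c["offsite"] for c in copies))

backups = [
    {"media": "server RAID array", "offsite": False},  # live data
    {"media": "external drive",    "offsite": False},  # onsite backup
    {"media": "remote server",     "offsite": True},   # offsite backup
]
print(satisfies_3_2_1(backups))  # True
```

Drop any one leg -- say, the offsite copy -- and the check fails, which is exactly the gap the strategy is designed to close.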