NVMe hardware raid now available at XLHost

Posted by Drew Weaver on Thu, Jul 22, 2021 @ 01:52 PM

PCIe NVMe SSDs have been available for enterprise/datacenter deployments for several years. With the performance of the typical NVMe SSD being ten to fifteen times greater than that of the average SATA SSD, it has been very tempting to deploy these drives for applications that require a lot of IOPS, storage throughput, or both. Up until now, using NVMe drives in servers has come with compromises related to resiliency and overall performance. While the drives themselves have redundant NAND flash memory, the drives can still fail in a number of ways that impact the availability of applications.

There were two primary solutions to these resiliency issues. The first was to scale the application out to multiple servers. Clustering servers is a great idea (especially if you are using virtualization) for both performance and availability reasons. The biggest downsides to clustering are the costs associated with duplicating servers, storage, networking, software licenses, etc.

The second solution was to use software RAID. Software RAID (built into modern operating systems) uses the server's CPU to calculate parity and prevent a disaster in the event of a disk failure. Depending on the CPU/platform you are using, as well as the volume of IO, we have observed as much as a 60% increase in CPU utilization just for calculating parity and carrying out the extra IO operations required to duplicate the data. In an extreme case, your application would only be able to use 40% of the CPU.
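The parity computation that eats those CPU cycles is, at its core, XOR across the data blocks. A minimal sketch with toy block sizes (real implementations operate on large stripes, often with SIMD acceleration):

```python
# RAID-5 style parity sketch: the parity block is the XOR of the data
# blocks, so any single lost block can be rebuilt from the others.
# This is the arithmetic a software RAID layer performs on every write.

def xor_blocks(blocks):
    """XOR a list of equal-length byte strings together."""
    result = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            result[i] ^= b
    return bytes(result)

data = [b"AAAA", b"BBBB", b"CCCC"]   # three toy data blocks
parity = xor_blocks(data)

# Simulate losing the second block and rebuilding it from the rest + parity.
rebuilt = xor_blocks([data[0], data[2], parity])
assert rebuilt == data[1]
```

Hardware RAID moves exactly this work (and the associated IO) off the host CPU and onto the controller.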

Today we are excited to announce our first ever server (built on the Dell PowerEdge R750) that offers hardware RAID controllers for NVMe SSDs. I mention controllers (plural) specifically because our first offering in this space actually has TWO Dell PERC H755N (Broadcom SAS3916 RAID-on-Chip) controllers. Each controller connects to eight 2.5" (U.2) ports on the front of the chassis. The reason we used two controllers in this server is that the drives are so fast that eight of them can overwhelm the available PCIe Gen4 bandwidth of a single controller. Spreading the drives across two controllers allows our customers to use 16 NVMe SSDs.

The downside, of course, is the increased cost of the drives (and the controllers), but we believe the 15x performance benefits completely outweigh the cost and create opportunities to further consolidate server fleets. Pricing on NAND flash continues to fall, and the prices of drives will fall with it. SSD manufacturers will also pivot completely away from SATA over the next few years. Another advantage of hardware RAID (and the ubiquity of Dell DRAC/OMSA management) is painless integration into your deployment pipelines and monitoring systems.

XLHost plans to announce multiple offerings using NVMe hardware RAID throughout 2021. These solutions will all offer different capabilities at different price points.

If there is a specific build that would work better for your business, please reach out to XLHost.

Tags: ssd, storage, dedicated servers

Choosing the right dedicated server storage

Posted by Drew Weaver on Tue, May 21, 2013 @ 09:26 AM

Hi again! This article is part two in the series which began with Choosing the right dedicated server platform. Here we will discuss various storage related issues, from hard drives to RAID levels and more. The goal at XLHost is to help you choose dedicated servers that will provide you with the best price to performance ratio for your application(s).

Supermicro SSG-6047R-E1R72L rear (a 72x 3.5" disk server)

Why is this important?

Data is the lifeblood of modern Internet applications. In the hierarchy of application performance, the storage you choose is second only to the performance of your network connectivity. After all, you can host your application on a dedicated server with a 20TB array of SSD disks, but if it is connected at 10Mbps the application is not going to scale as large as you want it to. Fortunately, XLHost has built one of the best hosting networks in the world to make sure that is not an issue.

In a connected world where everyone wants everything yesterday, the speed at which your application can display data to your users and collect data from your users can make or break your application.

Mechanical Storage - Spinning rust

With modern mechanical enterprise hard drives, the primary metric that determines the performance of the drive is its rotation speed. Enterprise SATA drives (except for the Western Digital VelociRaptor, which is 10,000RPM) are 7,200RPM. In comparison, SAS drives are either 7,200RPM (Near Line/NL SAS), 10,000RPM, or 15,000RPM. There are other metrics, such as the amount of cache the drive has, but as hard drives have become commoditized the performance of all drives with similar specifications usually falls within the same range.

Like performance, mechanical enterprise hard drives have varying degrees of expected reliability (I say expected, because anything that spins at 7,200 - 15,000RPM is going to have some defect/margin of error). Manufacturers publish MTBF (mean time between failure) numbers for different drives, a few of them are in the table below.

Drive Interface RPM MTBF
WD RE4 SATA 7200 1.2 Million Hours
HGST C10K1200 SAS 10000 2 Million Hours
Seagate 15K.7 SAS 15000 1.6 Million Hours

A note about massive capacity SATA drive reliability

Western Digital RE4

Mechanical storage vendors achieve massive storage capacities by stacking as many platters as they can inside of the 3.5" body of a hard drive. Each additional platter that is added to a drive increases the potential for something to go wrong. Each time the maximum size of a SATA drive has gone up it has been achieved either through adding more platters to a drive or by making advances in the density of the platter.

For example, a 4TB 3.5" SATA drive might today have 5x800GB platters. In the future it might have 4x1000GB platters (and so forth). As platter densities grow, the drives will become more reliable. As always, XLHost recommends RAID, but especially with the largest capacity drives.

Solid State Disks - A new hope

(The Intel DC S3700)

The most exciting area of advancement in application performance in the past 3 years has come from solid state disks. A solid state disk is essentially an array of flash memory and a controller attached to a board. The manufacturer then installs the board into a 2.5" drive casing. SSD drive performance is approaching the point where the 6Gbps SATA interface (common on most motherboards, backplanes, and RAID controllers) is being saturated. In Choosing the right SSDs for your dedicated servers we went through the primary differences in performance and reliability between mechanical disks and SSDs but I will summarize here.

Drive Type Sequential Read (MB/sec) Cost per GB MTBF (million hours)
Enterprise 7200 RPM SATA 152  $.12 2
Enterprise 15000 RPM SAS 198  $.45 1.6
6Gbps Intel SSD 520 550  $1.07 1.2
6Gbps Intel SSD S3700 500  $2.35 2.0

You can see in the table above that the performance of SSDs is more than double that of the fastest mechanical drives on the market today. Although there is currently a wide delta in cost per GB between mechanical storage and SSDs, the gap is closing quickly, and the performance benefits of SSDs vs. mechanical disks cannot be overstated.

All flash memory can only be written and erased a finite number of times, so there is a point at which an SSD will become unusable. By distributing the write cycles across all of the flash in the drive, however, the lifetime of an SSD is extended. Many SSD drives also have spare flash modules built into them.

The Intel DC S3700 has an entire spare array of flash and is the first SSD to be warrantied based on read/write cycles and validated for performance consistency, making it one of our favorite new products for 2013 at XLHost.


RAID arrays serve two basic purposes, depending on the RAID level: mirroring/parity and capacity/performance striping. These benefits are combined in RAID levels such as 5, 6, and 10.

RAID achieves redundancy through parity and mirroring. The most basic example of mirroring is RAID-1, which effectively mirrors the data from a "primary" drive to a "secondary" drive. The primary benefit of parity or mirroring is data protection, but a secondary feature is continued service in the event of a single (RAID-1, RAID-5, RAID-10) or multiple (RAID-6) disk failure(s).

The downsides of parity/mirroring are that some amount of disk space is not usable and that, depending on specific circumstances, performance can be impacted.

Volume striping is basically two or more devices combined to form a single volume -- if you have two 1TB drives in RAID-0 a single 2TB volume will be exposed to your operating system. In addition to having both of the drives present as a single volume, the performance of the volume is multiplied by the number of disks in the volume (minus a slight penalty, depending on the type of RAID controller used.)

The downside of volume striping is that with each additional drive you add to the span, the risk of data loss increases: if any drive in a RAID-0 set fails, all of the data in the volume is lost. So if you are using RAID-0, XLHost highly recommends continuous data replication to back up your data. (This is a good idea no matter what disk system you're using.)
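To see why the risk compounds, here is a back-of-the-envelope sketch. The 3% annual failure rate is an assumed figure for illustration only, not a vendor spec:

```python
# A RAID-0 span loses ALL data when ANY member drive fails, so the
# probability of losing the volume grows with every drive added.

def raid0_annual_loss_probability(drives, per_drive_failure=0.03):
    """Probability that at least one of `drives` drives fails in a year,
    assuming independent failures at the given per-drive rate."""
    survive_all = (1 - per_drive_failure) ** drives
    return 1 - survive_all

for n in (1, 2, 4, 8):
    print(n, round(raid0_annual_loss_probability(n), 4))
```

With the assumed 3% rate, an 8-drive span is roughly seven times more likely to lose data in a year than a single drive.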

We have already touched on RAID-1 (Mirroring) and RAID-0 (Spanning) above. We will briefly describe some of the more advanced RAID levels.


RAID-5 requires at least three drives and combines parity and spanning. In RAID-5, one drive's worth of capacity is always reserved for parity. For example, if you have three 500GB drives in a RAID-5 array, the usable volume size will be 1000GB, and one drive can fail at any time without an impact to your dedicated server. RAID-5 offers increased read performance over RAID-1 and better write performance than RAID-6.


RAID-6 requires at least four drives, and two drives' worth of capacity is always reserved for parity. For example, if you have four 500GB drives in a RAID-6 array, the usable volume size will be 1000GB, and up to two drives can fail at any time without an impact to your dedicated server. RAID-6 is considered the lowest performance option due to the double parity striped across all disks, but it also offers higher reliability.


RAID-10 is two (or more) RAID-1 arrays striped together via RAID-0 and is seen as the highest performance option that still offers protection from disk failure. RAID-10 requires a minimum of 4 disks, and only half of the disk space is available due to mirroring. For example, if you have four 500GB drives in a RAID-10 array, the usable volume size will be 1000GB. Unlike RAID-1, you can add multiple spans of disks to a RAID-10 array to increase both capacity and performance.
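The capacity rules above can be summarized in a small helper. This is a sketch of the nominal math only; real controllers reserve some additional metadata overhead:

```python
# Nominal usable capacity by RAID level, for `drives` identical
# drives of `size_gb` GB each.

def usable_capacity(level, drives, size_gb):
    if level == 0:
        return drives * size_gb             # striping: all capacity, no protection
    if level == 1:
        return size_gb                      # mirroring: one drive's capacity
    if level == 5:
        return (drives - 1) * size_gb       # one drive's worth of parity
    if level == 6:
        return (drives - 2) * size_gb       # two drives' worth of parity
    if level == 10:
        return (drives // 2) * size_gb      # half lost to mirroring
    raise ValueError("unsupported RAID level")

print(usable_capacity(5, 3, 500))    # 1000 (matches the RAID-5 example)
print(usable_capacity(6, 4, 500))    # 1000 (matches the RAID-6 example)
print(usable_capacity(10, 4, 500))   # 1000 (matches the RAID-10 example)
```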

XLHost General recommendations for business critical data:

  • Only use RAID-0 or single disks in clustered filesystems (Ceph, Gluster) or specific instances where a software vendor recommends it.
  • SATA disks in RAID-1 are great for boot volumes.
  • For low to medium IOPS/transactional requirements, SATA/SAS RAID-10 is a great choice.
  • For high IOPS requirements, SSDs in RAID-1 or RAID-10 are great options.
  • Don't be afraid to mix SSD and SATA/SAS to achieve the right price to performance ratio. You can put your database on SSD and your OS on SATA to save money on storage.
  • Back up your data, either using XLGuard or a 3rd party service (this cannot be stressed enough).
  • Consider future data growth: XLHost offers dedicated servers that take from 2 - 12 hard drives and custom servers that can utilize up to 72 3.5" drives.
  • Ask XLHost if you ever have any questions.

As you can see, selecting the storage for dedicated servers can be complicated, but XLHost's team of storage experts can make it easy for you.

Contact XLHost for a custom quote

Tags: ssd, storage, dedicated servers

From server to dedicated server service -- The facilities

Posted by Drew Weaver on Fri, May 03, 2013 @ 10:25 AM

It is a beautiful spring day at the XLHost campus in Columbus, OH so I will admit that I was looking for an excuse to get outside and enjoy it. One of the big questions we get from customers is what really goes into hosting dedicated servers? In this series of articles we will try to help you understand how XLHost takes a lowly pile of servers and turns it into a dedicated server service with a brief look (inside and out) at the XLHost datacenter in Columbus, OH.

(Idle servers → dedicated servers)


(The main gate, guard shack, concrete/steel security wall, yet another diesel generator, and would you look at that sky?)

Today we are looking at the amazing job that our facilities team does keeping the lights on (literally). In a future article we will explore the XLHost network and show off some of the high end technology XLHost uses to deliver your content to your users with amazing performance and industry leading reliability.


At XLHost, the term facilities describes the physical structure of the datacenters as well as the power and environmental systems. Basically everything you need to physically host a dedicated server is covered by our facilities team. We have invested millions of dollars into our datacenters since we began offering dedicated hosting way back in 2000.

Of course, the foundation of any dedicated hosting service is going to be lots and lots of racks. XLHost does not disappoint in this area!



Two megawatt diesel generators (because N+1 is a lot more fun!)


It takes a whole lot of cold air to keep thousands of dedicated servers happy. (Don't miss the AC hiding on the roof in the background)


Cold Aisle containment (keeps the hot side hot and the cold side cold)


One small part of our amazing room full of batteries (impossible to do it justice in a photo!)


The finished product

By now we hope that you can see that a lot more than a server goes into the dedicated server service that XLHost offers. XLHost provides the best value in dedicated hosting, and we do it while maintaining world class facilities with a stellar 13 year track record of reliability and quality.

Please let us know if you have any questions about our amazing facilities!


Tags: hosting, dedicated server, dedicated servers

Choosing the right dedicated server platform

Posted by Drew Weaver on Thu, Apr 25, 2013 @ 08:06 AM

If you have already decided that you are going to host your application on a dedicated server, the next question you will want to answer is which dedicated server is right for you. In this article we will try to make it easier to understand some of the differences between the various server platforms XLHost uses. This article is the first in a multi-part series where we will cover all aspects of dedicated server hardware.


One of the most important things you will want to consider when purchasing a dedicated server from any provider is the platform that the server is built upon. In the screenshot below of XLHost.com you can see that we list the platform for every dedicated server we offer. The download link will even download the PDF from the vendor that has all of the specifications.


The platform is important because (among other things) it determines the following:

  • The count and type of CPUs
  • The maximum quantity, form factor, and type of hard drives
  • The maximum RAM a server can use
  • The quantity of NICs in the server

In the screenshot above, those five dedicated server packages represent three similar, yet entirely different, server platforms. Let's take a look to see just how different they are.

Platform CPU Count/Type Max RAM Disk size/count RAID Generation
X9SCL+-F 1/Intel LGA1155  32GB  3.5"/3 N/A current
PowerEdge R310 1/Intel LGA1156  32GB  3.5"/4 0,1 previous
PowerEdge R320 1/Intel LGA1356  375GB  3.5"/4 0,1,5,10 current

There are a few things you should notice here. First, although these are all single CPU server systems, they each have a different socket. Second, both the X9SCL+-F (Ivy Bridge) and the PowerEdge R320 (Sandy Bridge-EN) are "current" generation products. Third is the maximum RAM XLHost can install into a dedicated server using these platforms.

Since they are the closest direct comparison, we will focus the rest of this article explaining the differences between the X9SCL+-F and the PowerEdge R320 based dedicated servers.

Wimpy cores vs. Brawny cores

There will be an entire article dedicated to choosing the right CPU for your dedicated server, but I wanted to explain a little bit about the differences in the CPUs. The fastest CPU you can install into an X9SCL+-F is an E3-1290v2: a 4 core, 8 thread, 3.7GHz (14.8GHz total) CPU. The fastest CPU you can install into a PowerEdge R320 is an E5-2470: an 8 core, 16 thread, 2.3GHz (18.4GHz total) CPU.

You might expect that because the E5-2470 has 8 cores and the E3-1290v2 has 4 cores, the total combined clock speed of all cores in the E5 would be double that of the E3, but there is only a 3.6GHz gap between the two CPUs. This is part of the ongoing debate about (more, lower-clocked) wimpy cores vs. (fewer, higher-clocked) brawny cores.

In some benchmarks the E3-1290v2 actually performs better than the E5-2470.

The bottom line when choosing a platform based on the CPU it supports is to know whether or not the application you will be running can use multiple CPU cores efficiently. If the application cannot, you would definitely want a higher clocked CPU.

Generation matters

Although both the X9SCL+-F and the PowerEdge R320 are based on current generation Intel chipsets, the X9SCL+-F uses Ivy Bridge while the R320 uses Sandy Bridge-EN. Ivy Bridge being a newer generation product will provide some performance advantage over the Sandy Bridge-EN part.

RAM is (almost) everything now

The primary reason you would choose an R320 over an X9SCL+-F for your XLHost dedicated server is if you think you will need more than 32GB of RAM. Intel has decided to limit the maximum amount of RAM on all E3 parts (even the upcoming E3-12xx v3) to 32GB. It is fairly uncommon these days for XLHost to see a CPU running at 100%. It is highly, highly common for XLHost to see a server run out of RAM and start swapping to disk, grinding applications to a halt.

RAID/Disk choices

XLHost currently only offers hardware RAID on Dell platforms. If you require hardware RAID or more than 3 hard drives in your single CPU dedicated servers, then the R320 would be the right choice for you.

Server Manageability

Most Supermicro platforms can be purchased with or without their IPMI/BMC product which includes remote power control, KVM/IP+virtual media, and basic server monitoring capabilities (XLHost includes this as a zero cost option on Supermicro dedicated servers).

However, one of the key benefits of Dell vs. Supermicro is the management tools, such as OpenManage, OpenManage Essentials, and the Dell Remote Access Controller Enterprise. These tools make it a snap to manage a single server or hundreds of servers.


These are some good things to know about the application you wish to run on your new dedicated server(s):

  • Does the application use multiple cores/CPUs well?
  • What are the RAM requirements of the application?
  • What are the disk/IO requirements of the application?
  • What are the disk space requirements of the application?
  • Is the data stored on the dedicated server mission critical?

These are the important questions to ask of any hosting provider before selecting a dedicated server platform (feel free to copy and paste this when dealing with any sales departments):

  • What CPU options are there for this platform?
  • What is the maximum amount of RAM I can install?
  • What disk form factor(s) (2.5"/3.5") can be installed in the server?
  • Disk types SATA+SAS, SATA only?
  • Disk controller 3Gbps/6Gbps?
  • How many disks can be installed in this dedicated server?
  • Does the dedicated server support hardware RAID?
  • Are the disks in the dedicated server hot swappable?
  • What are the available network connectivity options?
  • What are the platform management options?

One of the reasons we have customers come back to XLHost time and time again for their dedicated server needs is because they know that we are always happy to customize to the exact requirements of their business. We always want our customers to feel that they have gotten the exact servers they need at a fair price.

Contact XLHost for a custom quote

In our next article in this series we will talk about how to select the right hard drives and/or RAID levels for your dedicated servers. If you are anything like me, you can't wait for that!

As always please let us know if you have any questions by posting a comment!


Tags: hosting, dedicated server, dedicated servers

Dedicated Servers vs. Cloud Servers -- Which one is right for me?

Posted by Drew Weaver on Mon, Apr 15, 2013 @ 09:41 AM

There are many decisions which need to be considered when launching a new application onto the Internet. One of the most critical centers around what type of infrastructure technology to deploy. In this article we will explore and compare the features of Dedicated Servers vs. Cloud Servers and try to give you some insight into which product is right for you.


In 99% of cases a dedicated server will handle workloads faster than a cloud server with the same specifications. This is primarily due to the overhead introduced by the virtualization technology. However, with each new generation of server hardware and virtualization software the real world difference becomes more and more narrow.

Another reason that a dedicated server might outperform a cloud server is the shared CPU (in the physical server) and disk resources (either on the SAN or on local disks in the server itself). Most cloud offerings managed by service providers (including the XLHost cloud) are carefully monitored and maintained to ensure consistent performance. The other shared resource involved with cloud servers is the network connection. If you have throughput requirements higher than 2Gbps, we would recommend dedicated servers.

One thing to consider is that most of the time the physical servers, storage, and network connections that a cloud service provider uses for their public cloud offering have much higher specifications than an average dedicated server. This is due to the fact that the underlying server has to be able to handle anything that the cloud servers running on it can throw at it.

Winner: Dedicated servers

If you need 100% of the CPU, network and disk IO of a server 24x7x365 then dedicated servers are probably the right choice for you.


With dedicated servers, the two primary methods of scaling are hardware upgrades and migration. CPU, RAM, and (sometimes) hard disk upgrades require at least a reboot. If you have a load balanced cluster and can easily shut down a server without impacting your application, then this might not matter to you. If your server already has the maximum amount of RAM, CPU, and physical hard drives it can utilize, then you would likely need to migrate to a new server. If you are running your applications inside of virtual machines this is usually a simple task; if not, it can take a fair amount of work depending on how your operating system handles being moved from one platform to another.

With cloud servers, you have the ability to upgrade and downgrade your servers on the fly. In some cases, depending on the operating system running on the cloud server, this may still require a reboot. Migrating your cloud server to a different physical server can usually be accomplished with no downtime via hot migration.

Winner: Cloud servers

If you have a cluster of dedicated servers or can withstand the downtime associated with scaling, dedicated servers are great. However, the versatility of cloud servers puts them way ahead in this category.

Management and Provisioning

If done properly, the management and provisioning of cloud servers will usually provide a much better user experience than that of dedicated servers. Cloud server operations are completely self-service, and everything takes place in real time. The video below shows you how to create a cloud server in under 90 seconds using the XLHost Cloud. This can be done even faster using the API.

Winner: Cloud Servers

Dedicated servers have come a long way in terms of provisioning and management, but the benefit of abstracting the hardware from the process makes this an easy choice.


If we assume that the dedicated server and the cloud server we are comparing are running on the exact same hardware, then the reliability should be equal. However, in most cases building a high-availability solution is much easier and less expensive on cloud servers. Depending on the storage used, cloud servers can be extremely fault tolerant.

Winner: Tie

Applications built on dedicated servers or cloud servers can have the same level of reliability if they are deployed appropriately.


Cloud server isolation, while not perfect, has come a long way since the very first hypervisor products were released. Security researchers are finding fewer, and much more difficult, ways to circumvent the isolation techniques used in modern hypervisors. In general, for all applications exposed to the Internet (cloud servers, dedicated servers, VPS, shared hosting), the most popular attack vector is still going to be exploiting software vulnerabilities or brute force attacks, because this is the lowest hanging fruit available.

Winner: Tie

There are much easier ways to compromise an application hosted on the Internet than attacking hypervisor isolation. The only time XLHost would recommend dedicated servers over cloud servers for security purposes is if your company operates in a regulated industry which requires complete isolation.


When we examine value, we are weighing the actual benefits of dedicated servers vs. cloud servers against how much each product costs. Dedicated servers are generally billed monthly, and you pay a consistent amount each month no matter how much of the server you actually use. Many businesses find that it is much easier for them to budget with flat-rate monthly billed services.

With cloud servers on the other hand you are (usually) billed by the hour and pay only for the resources you consume. Cloud server resources are usually priced higher than the equivalent dedicated server resources but because you can dynamically scale a cloud server deployment vertically and horizontally there is a good chance that the TCO of a cloud solution will be lower.
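A quick, purely hypothetical break-even sketch. Both prices below are made-up illustrations, not XLHost rates:

```python
# Assumed prices for illustration only: a dedicated server at a flat
# $200/month vs. an equivalent cloud server billed at $0.40/hour.
# The break-even point is the number of hours per month at which the
# hourly cloud server starts costing more than the flat-rate box.

dedicated_monthly = 200.00   # assumed flat monthly rate
cloud_hourly = 0.40          # assumed hourly rate

break_even_hours = dedicated_monthly / cloud_hourly
print(break_even_hours)                    # 500.0 hours

hours_in_month = 24 * 30                   # ~720 hours
print(break_even_hours / hours_in_month)   # ~0.69: cloud wins below ~69% utilization
```

In other words, under these assumed prices an always-on workload favors the dedicated server, while a workload that runs well under ~500 hours a month favors the cloud server.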

Even though "The Cloud" has come a long way many applications are still not built for hyperscale environments.

Winner: Tie

If you have the ability to constantly manage the resources you are consuming to control costs, then deploying your applications on a cloud server makes a lot of sense. If not, it could end up costing you more to deploy your applications on a cloud server for the same computing resources.


There is no simple answer for all workloads or all organizations. Although they will work well on the XLHost Cloud, applications with higher disk IO or network throughput requirements will most likely have a better TCO hosted on dedicated servers. Keep in mind that you can always run a hybrid deployment and host your front-end applications on cloud servers and the back-end databases on dedicated servers.

Like everything related to application deployment the key is to test and collect data to see what gives your users the best experience at the lowest possible cost.

As always please contact us if you have any questions about anything in this article, if you agree or disagree please let us know in the comments.



Tags: cloud, cloud services, dedicated servers

Maximizing delivery of emails sent from your dedicated servers

Posted by Drew Weaver on Sat, Apr 06, 2013 @ 11:36 AM

Thanks to the bombardment of spam levied against business and consumer inboxes, it is becoming increasingly difficult for legitimate marketing and transactional email messages to reach the hallowed inbox. XLHost gets questions frequently regarding email delivery, and I want to share with you some best practices I have learned and free tools that you can use to ensure that the email you send from your dedicated servers gets delivered to the inbox.


The most important question to consider before sending any marketing email is: is what I am sending actually spam? Technically, spam is defined as any email that is unsolicited, meaning that if you simply purchase a list of email addresses, you are most likely sending email to people who would rather not hear from you. We will assume for the rest of this article that you are sending email to a list of your own customers, or to people/businesses who have expressed interest in your products and services.

If you find that the emails you are sending are ending up in your recipients' spam folders, here are some things you can check.

Blacklists - Make sure that your dedicated server's IP addresses are not listed in Spamhaus or any DNS blacklists. There are several free tools and services available that you can use to check this. MXtoolbox is a very popular choice for this.

Reputation - Check the reputation of the IP address you are sending from with SenderBase. Many large commercial ISPs use information from SenderBase to determine whether or not to accept your email.

Reverse DNS - Make sure that the IP address on your dedicated server which you are sending from has proper reverse DNS and that the reverse DNS matches. For example, if your server sends email as server.yourdomain.com, make sure that the reverse DNS for the IP address is server.yourdomain.com. XLHost customers can modify their reverse DNS records in Grande.
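In BIND-style zone-file terms, a matching pair looks like this (203.0.113.10 is a documentation address and server.yourdomain.com a placeholder, so substitute your own values):

```
; forward zone for yourdomain.com
server.yourdomain.com.        IN  A    203.0.113.10

; reverse zone for 113.0.203.in-addr.arpa
10.113.0.203.in-addr.arpa.    IN  PTR  server.yourdomain.com.
```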

(Matching forward/reverse DNS)

SPF and DKIM - Make sure you have published SPF and Domain Keys in the DNS records for your domain. SPF allows you to publish a list of IP addresses which should be trusted to send email for your domain. Domain Keys allow messages sent from your domain to be authenticated by the receiving server.
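As a sketch, the records look something like this in a zone file (the domain, IP, selector `mail`, and the truncated key are all placeholders, not values to copy):

```
; SPF: only the listed address may send mail for yourdomain.com
yourdomain.com.                   IN  TXT  "v=spf1 ip4:203.0.113.10 -all"

; DKIM public key, published under a selector (key shortened here)
mail._domainkey.yourdomain.com.   IN  TXT  "v=DKIM1; k=rsa; p=MIGfMA0GCSq...AB"
```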

My favorite tool to check for email delivery issues has quickly become Mail Tester.


Mail Tester is a free, simple, and effective tool for simulating email delivery. When you visit mail-tester.com, an email address appears in the box. You simply send the message you are planning to send to your users to the mail-tester.com email address. After you click the button, mail-tester provides you with a score that shows how likely your email is to reach the inbox, based on many of the factors listed above.

If you are utilizing all of the methods and tools mentioned, you should be on the road to email delivery nirvana in no time. If you are still having problems with email delivery, please contact XLHost and we will see if we can help.


Tags: DNS, dedicated servers, email delivery

Choosing the right SSDs for your dedicated servers

Posted by Drew Weaver on Tue, Apr 02, 2013 @ 08:20 AM

SSDs have been generally available for mainstream enterprise and consumer use since about 2008. As the average price per GB of SSDs rapidly declines and their performance, reliability, and endurance continue to increase, many enterprises are turning to SSD storage as a way of delivering the data their applications need to achieve outstanding performance.

SSD for dedicated server

The table below illustrates some of the differences between current storage technologies:

Drive Type Sequential Read (MB/sec) Cost per GB MTBF (million hours)
Enterprise 7200 RPM SATA 152  $.12 2
Enterprise 15000 RPM SAS 198  $.45 1.6
6Gbps Intel SSD 520 550  $1.07 1.2
6Gbps Intel SSD S3700 500  $2.35 2.0

As you can see from the table above, the sequential read performance of SSDs is more than double that of the fastest mechanical storage available today. There is still a steep price per GB penalty for using SSDs vs. traditional hard drives, but since this article is intended to help you choose an SSD, let's look at the differences between the Intel SSD 520 and the Intel SSD S3700 in more detail.

Drive      Sequential Read/Write (MB/sec)  Random Read/Write (IOPs)  Published Endurance
SSD 520    550/500                         25,000/40,000             N/A
SSD S3700  500/200                         75,000/19,000             10 drive writes/day for 5 years

If all we look at is price and raw performance, the SSD 520 would be a great choice for 90% of workloads. However, the primary reasons the S3700 is more expensive are that it delivers consistent performance over the lifetime of the drive, increases reliability by adding a spare array of flash, and is rated for endurance.

Consistent performance

The reasons vary from firmware issues to controller design, but all SSDs suffer from some degree of performance inconsistency. This means that some number of requests made to the drive will perform outside of the on-paper averages listed for the drive. Intel has designed the S3700 to favor consistent performance over sharp peaks and valleys, and has gone as far as creating a standard for performance consistency.

Intel certifies that 90% of read and 85% of write operations handled by the 100GB S3700 drive will be consistent with specifications and 90% of both read and write on the larger models will be consistent with specifications.


Endurance

Since all flash memory has a limit to the number of times it can be programmed and erased, the storage industry has adopted the term endurance to describe this limitation in SSDs. Intel has built the S3700 series to withstand 10 complete drive writes per day over a 5 year period. This means that you could fill and erase an entire 100GB S3700 drive 18,250 times (the equivalent of 1.825PB of writes).
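The endurance arithmetic above checks out; here is a quick sketch in Python using the figures quoted in this post:

```python
# Endurance figures published for the 100GB Intel SSD S3700:
# 10 complete drive writes per day over a 5 year rating period.
capacity_gb = 100
drive_writes_per_day = 10
years = 5

total_fills = drive_writes_per_day * 365 * years          # complete fill/erase cycles
total_writes_pb = total_fills * capacity_gb / 1_000_000   # GB written, expressed in PB

print(total_fills)       # 18250
print(total_writes_pb)   # 1.825
```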


Reliability

Both the Intel SSD 520 series and the Intel SSD S3700 series are built using MLC flash, but while the 520 is targeted more at consumers, the S3700 is targeted at the datacenter. The S3700 increases reliability by including a spare array of flash which can be used in the event of a flash failure.

Using SSDs in dedicated servers

Adding an SSD to a dedicated server is one of the easiest ways to achieve a performance boost for IO-restricted systems. There are several ways to use SSDs in servers. The most obvious is to install the SSD into the server and use it as primary storage. Another way to leverage an SSD is as a cache tier. In SSD caching, "hot" (frequently accessed) data is dynamically copied to the SSD, so subsequent accesses of that data take advantage of the SSD's increased performance without the added cost of replacing all of your storage with SSDs.

In Linux you can create a software SSD cache using flashcache, which was developed by Facebook and released under the GPLv2. You can also create SSD caches in hardware on high-end RAID controllers.
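The "hot data" behavior described above can be modeled with a toy read-through cache. This Python sketch is purely illustrative (flashcache itself operates as a kernel block-device layer, not application code), but it shows why repeated reads of the same data benefit once the data lands on the fast tier:

```python
class ReadThroughCache:
    """Toy model of an SSD cache tier: reads are served from the fast
    tier when the block is 'hot', and copied in from the slow tier on a miss."""
    def __init__(self, backing):
        self.backing = backing   # slow tier (mechanical disk)
        self.cache = {}          # fast tier (SSD)
        self.hits = 0
        self.misses = 0

    def read(self, block):
        if block in self.cache:          # hot data: served from SSD
            self.hits += 1
        else:                            # cold data: fetched once, then cached
            self.misses += 1
            self.cache[block] = self.backing[block]
        return self.cache[block]

disk = {0: b"boot", 1: b"data"}
tier = ReadThroughCache(disk)
tier.read(1); tier.read(1); tier.read(1)
print(tier.hits, tier.misses)  # 2 1
```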

XLHost has been installing the Intel 520 series SSDs in dedicated servers since they launched, and we have had no reported performance or reliability issues with them. That being said, we believe the additional confidence provided by the feature set of the S3700 series makes it worth considering for applications with critical performance and reliability requirements.

XLHost now stocks the S3700 100GB and 200GB models, and all capacities are available for custom order. As always, contact us if you have any questions!


Tags: ssd, storage, dedicated servers

Is your dedicated server being used for DDoS attacks?

Posted by Drew Weaver on Thu, Mar 28, 2013 @ 08:57 AM

First, let me welcome you to the new XLHost blog and introduce myself. I am Drew Weaver, Chief Technical Officer at XLHost. If you're a customer of XLHost, we have most likely interacted at some point in the past. I have been with XLHost since 1999, and it is basically my job to oversee the operation of anything that has a blinking light and to make sure that our amazing technical support staff continues to be the best in the industry. My posts will likely skew towards technical matters, so bear with me if my geekiness shines through. (I will try to keep it in check)

Welcome to the XLHost blog

It is unfortunate that my first post on the new XLHost blog is about a record-breaking DDoS attack, but I felt this topic was important enough to garner some additional attention and raise awareness. You may have read recently about the record-breaking 300Gbps distributed denial of service (DDoS) attack targeting Spamhaus. Spamhaus is an organization of volunteers who maintain lists of IP addresses used to send spam, spread malware, and generally make the Internet less enjoyable for you and your users. Spamhaus is frequently the target of denial of service attacks, but never before anything as large as this.


The bulk of the attack traffic was delivered using a technique known as DNS amplification (more generally, UDP amplification). With DNS amplification, the attacker sends DNS queries to hundreds or thousands of open DNS resolvers with the source IP address set to the IP address of the target. Those open resolvers then send their answers back to the target's IP address. The attacker sends 36 bytes to an open resolver, and the resolver replies with as much as 3,000 bytes (that is the amplification part). For example, if the attacker has a 100Mbps connection to the Internet and used the entire usable amount (91Mbps) to generate requests, the attack traffic could be as much as 7.6Gbps.
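The bandwidth arithmetic works out as follows (a quick sketch using the request and response sizes given above):

```python
request_bytes = 36      # spoofed DNS query sent by the attacker
response_bytes = 3000   # large answer returned by the open resolver
amplification = response_bytes / request_bytes   # roughly 83x

attacker_mbps = 91      # usable bandwidth on a 100Mbps link
attack_gbps = attacker_mbps * amplification / 1000

print(round(amplification))   # 83
print(round(attack_gbps, 1))  # 7.6
```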

This is possible because many Internet Service Providers still allow traffic to leave their networks with source IP addresses they are not responsible for (this is called IP address spoofing). Mechanisms to prevent IP address spoofing were first proposed in BCP 38, originally published in May of 2000. Unicast Reverse Path Forwarding (URPF) was added as a feature in most routers by 2005, and it is now available in all ISP router platforms. The only reason URPF is not effective is that many ISPs choose not to implement it. XLHost has had URPF deployed since 2007.

Since there is very little chance that we can convince large networks to deploy URPF, let's instead talk about what XLHost can do, and what you can do, to combat UDP amplification. If you are running a DNS server such as Microsoft DNS, ISC BIND, or PowerDNS on a server connected to the Internet, you should make sure that the server is not an open DNS resolver.
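As a rough illustration of what such a check involves, the sketch below hand-builds a recursive DNS query with the Python standard library and inspects the RA (recursion available) flag in the reply. A real scanner would do more (retries, TCP fallback, validating the answer itself); the target IP is whatever host you want to test:

```python
import socket
import struct

def build_query(qname, qtype=1):
    """Build a minimal DNS query for qname (qtype 1 = A record)
    with the RD (recursion desired) flag set."""
    header = struct.pack(">HHHHHH", 0x1234, 0x0100, 1, 0, 0, 0)
    labels = b"".join(bytes([len(p)]) + p.encode() for p in qname.split("."))
    return header + labels + b"\x00" + struct.pack(">HH", qtype, 1)

def is_open_resolver(ip, timeout=2.0):
    """Return True if ip:53 answers a recursive query with RA set."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.settimeout(timeout)
    try:
        s.sendto(build_query("example.com"), (ip, 53))
        data, _ = s.recvfrom(512)
    except OSError:            # timeout or unreachable: not an open resolver
        return False
    finally:
        s.close()
    flags = struct.unpack(">H", data[2:4])[0]
    return bool(flags & 0x0080)   # RA (recursion available) bit
```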

XLHost will soon be launching a free tool that will allow customers to scan their dedicated servers, VPS servers, or Cloud servers to determine whether they are open DNS resolvers. We will also be scanning our network to find any open DNS resolvers and will assist customers in closing them down.

Please let us know if you have any questions about this issue. I recommend that anyone interested in the information in this post check out the excellent Open DNS Resolver Project website. There you will find more details on the problem and tools you can use to scan for and close open DNS resolvers.

Once again welcome to the blog!


Tags: DNS, security, dedicated servers