One might be wondering why the x1.32xlarge instance, with its monster amount of memory and cutting-edge features, even exists. Its specs are gargantuan compared to the other EC2 families, and its on-demand price per hour is equivalent to a decent Portland lunch. Let's dig in.
Enterprise AWS users clamored for an EC2 instance with more memory and horsepower to handle their in-memory enterprise apps and workloads, and AWS listened. According to a recent AWS blog article, the x1.32xlarge is now available for use.
Let’s take a look at what the x1 family features and how it compares to the current line of r3 memory-optimized EC2 instances.
Quick specs and pricing
Here’s a quick comparison between two of the larger r3 instances and the new x1. The x1 family has just one member, the 32xlarge…for now.
Comparing the x1 with r3s
| Instance | SSD storage | Linux on-demand price | Cost per GB of memory |
|---|---|---|---|
| x1.32xlarge | 3,840 GB (2 × 1,920 GB SSD) | $13.338 per hour | $0.0068 per GB |
| r3.4xlarge | 320 GB SSD | $1.330 per hour | $0.010 per GB |
| r3.8xlarge | 640 GB (2 × 320 GB SSD) | $2.660 per hour | $0.010 per GB |
The gargantuan x1.32xlarge actually offers the most cost-efficient access to memory of the three. Beyond the usual hardware specs, the AWS blog reports that the x1 features Single Device Data Correction (SDDC) and a few other high-performance compute features, such as Intel TSX-NI and AES-NI (crypto acceleration).
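That cost-per-GB claim is easy to sanity-check: it's just the hourly price divided by instance memory. The memory sizes below (1,952 GiB for the x1.32xlarge, 244 GiB for the r3.8xlarge, 122 GiB for the r3.4xlarge) come from AWS's published instance specs, not from the table above, so treat this as a rough back-of-the-envelope sketch:

```python
# Rough cost-per-GB-of-memory comparison.
# Hourly Linux on-demand prices are from the table above; memory sizes
# are AWS's published figures for each instance type.
instances = {
    "x1.32xlarge": {"price_per_hour": 13.338, "memory_gib": 1952},
    "r3.8xlarge":  {"price_per_hour": 2.660,  "memory_gib": 244},
    "r3.4xlarge":  {"price_per_hour": 1.330,  "memory_gib": 122},
}

def cost_per_gib(spec):
    """Hourly cost of each GiB of instance memory."""
    return spec["price_per_hour"] / spec["memory_gib"]

for name, spec in instances.items():
    print(f"{name}: ${cost_per_gib(spec):.4f} per GiB-hour")
```

Even though the x1.32xlarge costs roughly five times as much per hour as an r3.8xlarge, each GiB of its memory comes out around a third cheaper.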
The x1 gives real-time, in-memory databases a boost
From what we can infer from the AWS announcement, the x1 gives enterprise operational teams the means to run a single x1.32xlarge for their real-time, in-memory database workloads, instead of wrangling many upper-end r3.4xlarge or r3.8xlarge instances to get the job done. The x1 can run those applications more cost-efficiently, and leaves teams with a single EC2 instance to manage, monitor, and pull cost and usage data from.
While fleets of r3 instances are well suited to distributed in-memory processing, such as Solr, applications that require high-performance, real-time in-memory database access, such as Neo4j and Titan, could see immediate benefits from an instance like the x1. Whether it’s running in-memory graph databases or SAP HANA, the x1 has the immensely high amount of instance memory to handle these operations and workloads.
As these enterprise applications grow in user base and capabilities, it’s nice to know that the x1.32xlarge has the capacity to scale with those increasing needs. The sheer amount of performance and power will stave off the need to add on additional instances, at least for a while.
Migrating from r3 might come at a cost for Reserved Instance users
AWS users who want to start fresh with the x1 can sign up for on-demand pricing, or invest in 1- or 3-year Reserved Instance terms to lock in long-term savings. Dedicated Host pricing for the x1.32xlarge is also available at $14.672 per hour.
Users who are considering migrating their current fleet of r3 instances to the x1, and who have already purchased Reserved Instance terms, will have to consider what to do with those r3 RIs, as they won’t apply to the x1. One option is to modify those RIs to suit other operational needs that call for memory-intensive r3 instances. Another is to list them on the AWS Reserved Instance Marketplace to recoup some of the cost.
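For teams weighing on-demand against a Reserved Instance, the decision mostly comes down to expected utilization. The sketch below shows the break-even arithmetic; the RI upfront and hourly figures are illustrative placeholders, not AWS's actual x1 Reserved Instance pricing:

```python
# Break-even sketch: on-demand vs. a 1-year partial-upfront Reserved Instance.
# The RI figures below are hypothetical placeholders, NOT actual AWS pricing.
ON_DEMAND_PER_HOUR = 13.338   # x1.32xlarge Linux on-demand rate (from above)
RI_UPFRONT = 28000.0          # hypothetical upfront payment
RI_PER_HOUR = 5.50            # hypothetical effective hourly RI rate
HOURS_PER_YEAR = 24 * 365

def yearly_cost_on_demand(hours):
    return ON_DEMAND_PER_HOUR * hours

def yearly_cost_ri(hours):
    return RI_UPFRONT + RI_PER_HOUR * hours

# Utilization level at which the RI starts paying for itself.
break_even_hours = RI_UPFRONT / (ON_DEMAND_PER_HOUR - RI_PER_HOUR)
print(f"RI breaks even after {break_even_hours:.0f} hours "
      f"({break_even_hours / HOURS_PER_YEAR:.0%} of the year)")
```

With these placeholder numbers, an instance that runs more than roughly 40% of the year comes out cheaper on the RI; swap in the real rates from the AWS pricing page before drawing any conclusions.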
Operationally, some teams might wait to migrate if they find that from an administrative and management perspective, their current fleet of r3 instances are handling the work well. Closely monitoring the cost and usage data of their r3 instances is a great way to start the conversation around whether an upgrade is necessary. AWS users wanting to see this type of cloud cost management in action should get in touch.
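One lightweight way to start pulling that cost and usage data is the AWS Cost Explorer API. A minimal sketch, assuming boto3's standard `get_cost_and_usage` request and response shapes (the sample response here is hand-written for illustration, not real billing data):

```python
# Sketch: summarizing per-instance-type cost via the Cost Explorer API.
def cost_query(start, end):
    """Build a GetCostAndUsage request grouping daily cost by instance type."""
    return {
        "TimePeriod": {"Start": start, "End": end},
        "Granularity": "DAILY",
        "Metrics": ["UnblendedCost"],
        "GroupBy": [{"Type": "DIMENSION", "Key": "INSTANCE_TYPE"}],
    }

def total_by_instance_type(response):
    """Sum UnblendedCost across days, keyed by instance type."""
    totals = {}
    for day in response["ResultsByTime"]:
        for group in day["Groups"]:
            itype = group["Keys"][0]
            amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
            totals[itype] = totals.get(itype, 0.0) + amount
    return totals

# In real use: boto3.client("ce").get_cost_and_usage(**cost_query("2016-05-01", "2016-05-03"))
sample = {"ResultsByTime": [
    {"Groups": [{"Keys": ["r3.8xlarge"],
                 "Metrics": {"UnblendedCost": {"Amount": "63.84", "Unit": "USD"}}}]},
    {"Groups": [{"Keys": ["r3.8xlarge"],
                 "Metrics": {"UnblendedCost": {"Amount": "63.84", "Unit": "USD"}}}]},
]}
print(total_by_instance_type(sample))
```

Two days of a single r3.8xlarge at $2.66 per hour shows up as $63.84 per day; totals like these, compared against the x1's hourly rate, are exactly the inputs the migration conversation needs.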
It’s going to be interesting to start seeing AWS customer stories about the x1.32xlarge, and to see what AWS has in store for other members of this new, memory-intensive, high-octane instance family.