
Are all AWS ECUs created equal?

Author Alexis Lê-Quôc

Published: August 28, 2013

While Amazon Web Services (AWS) has a large number of physical servers under management, it does not rent them out per se. Rather, AWS grants access to these servers in the form of virtual machines, which it calls EC2 instances.

Renting instances instead of physical servers has a number of advantages: it allows AWS to keep its product line the same even as the underlying hardware gets bigger, better, and faster every year. As users of servers, we are accustomed to having a yardstick to compare servers with one another. Does the same apply to EC2 instances?

To standardize compute across instance types and account for Moore’s Law, AWS created a logical unit of computation known as the Elastic Compute Unit (ECU). Unlike the units commonly used in engineering, there are scant public details about what an ECU precisely is.

The performance you receive and the price you pay for an EC2 instance are both shaped by the way AWS has split up compute into these abstracted “chunks” of virtual CPU. In this post we look at the publicly available data about ECUs and draw conclusions about relative performance and price.

Noted Differences in AWS ECU Performance

ECUs equate to a certain amount of computing cycles in a way that is purportedly independent of the actual hardware: 1 ECU is defined as the compute power of a 1.0-1.2 GHz 2007-era server CPU. As an example, the most common and oldest instance type, m1.large, is rated at 4 ECUs (2 cores of 2 ECUs each). A beefier instance with 64GB of memory, m2.4xlarge, suitable for most databases, is rated at 26 ECUs (8 cores of 3.25 ECUs each). http://ec2instances.info has a very handy summary of the various instance types and their ECU ratings.

However, compute, even when split into “equivalent” ECU capacity, still relies on underlying hardware to do the work. In other words, the performance of an instance’s compute will ultimately be determined in part by what’s running under the hood. Because different instance types draw from different physical server models, the quality of the underlying physical processor varies with the EC2 instance type you purchase to host an application. ECUs reflect that only partially.

For instance, newer server CPUs have more on-die cache (20MB for the E5-2670, 8MB for the X5550, 4MB for the E5507), which helps CPU-intensive applications. Multiple instances of the same instance type, rated at the same number of ECUs, will likely run on different CPU models. To find out which, you can run cat /proc/cpuinfo | grep "model name" if your instance runs Linux.

Here is an excerpt of the output of that command, run across instances of the same type.

i-44b2.... model name: Intel(R) Xeon(R) CPU E5410  @ 2.33GHz
i-9634.... model name: Intel(R) Xeon(R) CPU E5410  @ 2.33GHz
i-58a2.... model name: Intel(R) Xeon(R) CPU E5506  @ 2.13GHz
i-00b7.... model name: Intel(R) Xeon(R) CPU E5506  @ 2.13GHz
i-7244.... model name: Intel(R) Xeon(R) CPU E5506  @ 2.13GHz
i-aa21.... model name: Intel(R) Xeon(R) CPU E5506  @ 2.13GHz
i-54c5.... model name: Intel(R) Xeon(R) CPU E5506  @ 2.13GHz
i-d860.... model name: Intel(R) Xeon(R) CPU E5506  @ 2.13GHz
i-5592.... model name: Intel(R) Xeon(R) CPU E5-2650 0 @ 2.00GHz
i-3200.... model name: Intel(R) Xeon(R) CPU E5-2650 0 @ 2.00GHz
i-4ebd.... model name: Intel(R) Xeon(R) CPU E5-2650 0 @ 2.00GHz
i-703c.... model name: Intel(R) Xeon(R) CPU E5-2650 0 @ 2.00GHz

Notice the disparity in CPU models. If you care about which CPU model is running under the hood, you should check all of your instances with the command above.
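
If you manage more than a handful of instances, doing that by hand gets tedious. Below is a minimal sketch of how you might automate the survey over SSH; the host list file (instance-hosts.txt) and the ec2-user login are illustrative placeholders, not anything prescribed by AWS.

#!/usr/bin/env bash
# Minimal sketch: record which CPU model backs each instance in a fleet.
# "instance-hosts.txt" (one SSH-reachable host per line) and the "ec2-user"
# login are illustrative placeholders; substitute your own.
while read -r host; do
  model=$(ssh "ec2-user@$host" \
    "grep -m1 'model name' /proc/cpuinfo | cut -d: -f2- | sed 's/^ *//'")
  printf '%s\t%s\n' "$host" "$model"
done < instance-hosts.txt | tee cpu-models.tsv

# Summarize how many instances landed on each CPU model.
cut -f2 cpu-models.tsv | sort | uniq -c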

Additionally, the number of “neighbors” with whom you share the physical hardware can impact performance. The table below presents benchmark figures for different instance types:

Instance type | Physical CPU (core count) | PassMark per core | Cores per EC2 instance | ECUs per instance | ECU per core
m1.large      | Intel E5507 (4)           | 812               | 2                      | 4                 | 2
m2.4xlarge    | Intel X5550 (8)           | 663               | 8                      | 26                | 3.25
cc2.8xlarge   | Intel E5-2670 (16)        | 832               | 32                     | 88                | 2.75
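
As a rough sanity check on how well ECUs track measured performance, you can divide the PassMark-per-core figures above by the ECU-per-core ratings. The snippet below just redoes that arithmetic from the table; under this crude measure, an ECU does not buy the same amount of benchmarked compute on every instance type.

# Back-of-the-envelope only: PassMark per core divided by ECUs per core,
# using the figures from the table above.
awk 'BEGIN { printf "%-12s %s\n", "instance", "PassMark per ECU" }
           { printf "%-12s %.0f\n", $1, $2 / $3 }' <<'EOF'
m1.large 812 2
m2.4xlarge 663 3.25
cc2.8xlarge 832 2.75
EOF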

The main conclusion to draw from this table is that on larger instances you are much more likely to run by yourself or with very few neighbors.

The cc2.8xlarge offers 32 cores, which happens to be the total number of “threads” (a.k.a. vCPUs) available on a server with two Intel E5-2670 CPUs. So very few instances can run on that server at the same time without compromising its performance.

Conversely, one Intel E5507 CPU, with its 4 cores, can host at a bare minimum 2 instances. This makes the probability of having neighbors on the same physical hardware higher.
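
The arithmetic behind that reasoning is straightforward, assuming the core and thread counts above and no CPU oversubscription by the hypervisor (our simplifying assumption, not something AWS documents):

# Illustrative arithmetic only: hardware threads divided by vCPUs per instance
# bounds how many same-type instances fit without oversubscribing the CPU.
echo "cc2.8xlarge on 2x E5-2670 (32 threads): $(( 32 / 32 )) instance"
echo "m1.large on one E5507 (4 cores, no Hyper-Threading): $(( 4 / 2 )) instances"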

Another aspect to keep in mind when evaluating CPU performance is that noisy neighbors, as well as ECU mismatches within an instance, can cause performance issues. These are detailed further, along with remediation recommendations, in our free eBook, The Top 5 AWS EC2 Performance Problems.

AWS ECU Differences in Price

If ECUs were truly a “universal yardstick”, you would pay the same price per ECU-hour regardless of instance type. However, as shown in the table below, the most powerful of these instances is priced per ECU at roughly 50% of the rate of the less powerful ones.

Instance type | ECUs per instance | Cost per instance-hour | Cost per ECU-hour
m1.large      | 4                 | $0.24                  | $0.06
m2.4xlarge    | 26                | $1.64                  | $0.06
cc2.8xlarge   | 88                | $2.40                  | $0.03

Essentially, this amounts to a “volume discount” on larger instance types. Hence, by choosing a larger instance type you can not only achieve higher performance, but also get that capacity at a lower marginal price.
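
The last column of the pricing table is simply the hourly price divided by the ECU rating; redoing that division with the 2013 on-demand prices quoted above makes the roughly 50% discount on the cc2.8xlarge easy to see.

# Recompute cost per ECU-hour from the list prices and ECU ratings above.
awk 'BEGIN { printf "%-12s %s\n", "instance", "cost per ECU-hour" }
           { printf "%-12s $%.3f\n", $1, $2 / $3 }' <<'EOF'
m1.large 0.24 4
m2.4xlarge 1.64 26
cc2.8xlarge 2.40 88
EOF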

Stay tuned for a more in-depth discussion on how to select the appropriate amount of CPU in your AWS instances.