AWS boosts its infrastructure for memory-intensive tasks

News Analysis
Feb 22, 2023 | 3 mins
Cloud Computing | Generative AI

AWS claims its new Amazon EC2 M7g and R7g instances provide 25% better performance than the previous generation of instances.


Amazon Web Services (AWS) has announced the availability of its new Amazon EC2 M7g and R7g instances, the latest generation of instances for memory-intensive applications, which run on Amazon's custom Arm processor, known as Graviton3.

This is the second offering of Graviton3-based instances from AWS. It previously announced instances aimed at compute-intensive workloads last May.

Both the M7g and the R7g instances deliver up to 25% higher performance than equivalent sixth-generation instances. Part of the performance bump comes from the adoption of DDR5 memory, which offers up to 50% higher memory bandwidth than DDR4. But there’s also considerable performance gain from the new Graviton3 chip.

Amazon claims that compared to instances run on Graviton2, the new M7g and R7g instances offer up to 25% higher compute performance, nearly twice the floating point performance, twice the cryptographic performance, and up to three times faster machine-learning inference.

The M7g instances are for general-purpose workloads such as application servers, microservices, and mid-sized data stores. M7g instances scale from one virtual CPU with 4GiB of memory and 12.5Gbps of network bandwidth to 64 vCPUs with 256GiB of memory and 30Gbps of network bandwidth. (A GiB is a gibibyte, a binary unit of storage measurement: 1GiB is 2^30 bytes, or about 1.07GB, while 1GB is 10^9 bytes, or about 0.93GiB. GB is often used loosely for both values, which is the ambiguity GiB was coined to avoid, though the term gibibyte hasn't caught on.)
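To make that unit arithmetic concrete, here is a minimal Python sketch (not from the article) converting between the two units, using the 256GiB figure from the largest M7g size:

```python
# Binary vs. decimal storage units
GIB = 2**30  # 1 gibibyte = 1,073,741,824 bytes
GB = 10**9   # 1 gigabyte = 1,000,000,000 bytes

print(f"1 GiB = {GIB / GB:.2f} GB")   # 1 GiB = 1.07 GB
print(f"1 GB  = {GB / GIB:.2f} GiB")  # 1 GB  = 0.93 GiB

# The largest M7g size's 256GiB of memory in decimal gigabytes:
print(f"256 GiB = {256 * GIB / GB:.1f} GB")  # 256 GiB = 274.9 GB
```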

The R7g instances are tuned for memory-intensive workloads such as in-memory databases and caches, and real-time big-data analytics. R7g instances scale from one vCPU with 8GiB of memory and 12.5Gbps of network bandwidth to 64 vCPUs with 512GiB of memory and 30Gbps of network bandwidth.
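For readers who want to try the new instance types, a minimal boto3 sketch for launching one follows. The AMI ID is a placeholder (Graviton instances require an Arm64 image), and the region and size shown are illustrative assumptions, not details from AWS's announcement:

```python
import boto3

# Launch a single R7g instance; Graviton3 is Arm-based, so the AMI
# must be built for the arm64 architecture.
ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder: substitute any arm64 AMI
    InstanceType="r7g.large",         # a small size in the scaling range described above
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])
```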

New AWS AI partnership

AWS has also announced an expanded partnership with startup Hugging Face to make more of its AI tools available to AWS customers. These include Hugging Face's language-generation tool for building generative AI applications that perform tasks such as text summarization, question answering, code generation, image creation, and writing essays and articles.

The models will run on AWS's purpose-built machine-learning accelerators: AWS Trainium for training and AWS Inferentia for inference of large language and vision models. The benefits include faster training and low-latency, high-throughput inference at scale. Amazon claims Trainium instances offer 50% lower cost-to-train than comparable GPU-based instances.

Hugging Face models on AWS can be used in three ways: through SageMaker JumpStart, AWS's tool for building and deploying machine-learning models; through the Hugging Face AWS Deep Learning Containers (DLCs); or through tutorials for deploying custom models to AWS Trainium or AWS Inferentia.
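As one illustration of the SageMaker route, here is a minimal sketch using the SageMaker Python SDK's Hugging Face support. The model ID, IAM role, instance type, and framework versions are assumptions for illustration, not details from the announcement:

```python
from sagemaker.huggingface import HuggingFaceModel

# Point the Hugging Face inference container at a model on the Hub.
hub_env = {
    "HF_MODEL_ID": "distilbert-base-uncased-finetuned-sst-2-english",  # example model
    "HF_TASK": "text-classification",
}

model = HuggingFaceModel(
    env=hub_env,
    role="arn:aws:iam::123456789012:role/SageMakerRole",  # placeholder IAM role
    transformers_version="4.26",  # assumed DLC versions; check AWS's supported list
    pytorch_version="1.13",
    py_version="py39",
)

# Deploy to a real-time endpoint and run a quick prediction.
predictor = model.deploy(initial_instance_count=1, instance_type="ml.m5.xlarge")
print(predictor.predict({"inputs": "AWS expands its Graviton3 instance lineup."}))
```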

Andy Patrizio is a freelance journalist based in southern California who has covered the computer industry for 20 years and has built every x86 PC he’s ever owned, laptops not included.

The opinions expressed in this blog are those of the author and do not necessarily represent those of ITworld, Network World, its parent, subsidiary or affiliated companies.