Some interesting AWS updates from 2020

For those of us who have been using AWS for many years, it's easy to forget that the platform is still fairly new. AWS Lambda was only released in 2014, a mere six years ago! The entire platform is only about 14 years old, and back when it was released its only offerings were S3, EC2, and SQS. Now AWS has more than 175 services, and more are being developed all the time. Just this year, Amazon released 2,284 news-worthy features to AWS. That count covers only new features, and doesn't even include all the incremental updates to existing services.

Over two thousand major releases! You'd have to read over 6 news items a day every day for a year to learn about them all. How could I possibly provide a thorough review?

The answer is that I can't, of course. What I can do is mention a few things that seemed interesting to me. I've split them out into broad categories, so if you're not interested in a category you can skip over it.

Developer Tools

I'm not a big user of Amazon's developer tools, but one thing that jumped out at me was that Amazon added one of my favorite little Azure (and GCP) features – a cloud shell that you can access from a browser. It's really cool to be able to pop open a free little Linux shell that has handy things like Python and Node already installed! Check it out here: AWS CloudShell


Serverless

Serverless computing evolved significantly this year, and a lot of that was due to enhancements to AWS Lambda. The Lambda changes I found most interesting were:

1 ms billing:

New for AWS Lambda – 1ms Billing Granularity Adds Cost Savings
Since Lambda was launched in 2014, pricing has been based on the number of times code is triggered (requests) and the time code runs, rounded up to the nearest 100 ms (duration). Starting today, we are rounding up duration to the nearest millisecond with no minimum execution time.
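The savings are easy to quantify. A minimal sketch comparing the old 100 ms rounding with the new 1 ms rounding (durations here are made up for illustration):

```python
import math

def billed_duration_ms(actual_ms: float, granularity_ms: int) -> int:
    """Round an invocation's duration up to the nearest billing granularity."""
    return math.ceil(actual_ms / granularity_ms) * granularity_ms

# A 13 ms invocation used to be billed as a full 100 ms; now it bills as 13 ms.
old = billed_duration_ms(13, 100)  # 100
new = billed_duration_ms(13, 1)    # 13
print(f"old: {old} ms, new: {new} ms ({100 * (old - new) / old:.0f}% less)")
```

For short, frequently invoked functions the difference compounds quickly; functions that already run for hundreds of milliseconds see much smaller gains.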

Higher memory and compute limits:

AWS Lambda now supports up to 10 GB of memory and 6 vCPU cores for Lambda Functions
AWS Lambda customers can now provision Lambda functions with a maximum of 10,240 MB (10 GB) of memory, a more than 3x increase compared to the previous limit of 3,008 MB. This helps workloads like batch, extract, transform, load (ETL) jobs, and media processing applications perform memory intensive operations at scale.
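Since Lambda compute cost scales linearly with configured memory, the new ceiling matters for your bill as well as your workloads. A rough sketch of the GB-seconds math (the per-GB-second price below is illustrative, not a quoted rate; check current Lambda pricing):

```python
def lambda_compute_cost(memory_mb: int, duration_ms: int,
                        price_per_gb_second: float = 0.0000166667) -> float:
    """Estimate Lambda compute cost: memory (GB) x duration (s) x unit price.

    price_per_gb_second is an illustrative figure, not an authoritative rate.
    """
    gb = memory_mb / 1024
    seconds = duration_ms / 1000
    return gb * seconds * price_per_gb_second

# A 10,240 MB function running for 30 s consumes 300 GB-seconds.
print(f"${lambda_compute_cost(10240, 30000):.6f}")
```

Note that vCPU allocation also scales with memory, so memory-hungry configurations can finish faster and partially offset the higher per-second rate.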

Support for Docker containers:

New for AWS Lambda – Container Image Support
With Lambda, you upload your code and run it without thinking about servers. Many customers enjoy the way this works, but if you’ve invested in container tooling for your development workflows, it’s not easy to use the same approach to build applications using Lambda. To help you with that, you can now package and deploy Lambda functions as container images of up to 10 GB in size.
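Packaging a function as a container image typically starts from one of AWS's published base images, which bundle the Lambda runtime client. A minimal sketch for a Python function (the file name `app.py` and handler `app.handler` are placeholders for illustration):

```dockerfile
# AWS-provided Lambda base image for Python; includes the runtime interface.
FROM public.ecr.aws/lambda/python:3.8

# Copy function code and dependencies into the Lambda task root.
COPY app.py ${LAMBDA_TASK_ROOT}
COPY requirements.txt .
RUN pip install -r requirements.txt --target "${LAMBDA_TASK_ROOT}"

# Tell the runtime which handler to invoke: <module>.<function>.
CMD ["app.handler"]
```

You build and push the image to ECR as usual, then point the Lambda function at the image URI instead of a zip archive.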

And hooks to Kinesis and DynamoDB for data processing:

AWS Lambda now makes it easier to build analytics for Amazon Kinesis and Amazon DynamoDB Streams
Customers can now use AWS Lambda to build analytics workloads for their Amazon Kinesis or Amazon DynamoDB Streams. For no additional cost, customers can build sum, average, count, and other simple analytics functions over contiguous, non-overlapping time windows (tumbling windows) of up to 15 minutes per shard. Customers can consolidate their business and analytics logic into a single Lambda function, reducing the complexity of their architecture.
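With tumbling windows, Lambda hands your function the previous invocation's state and a flag marking the window's final invocation. A sketch of a running-sum aggregator (the `state` and `isFinalInvokeForWindow` fields are part of the tumbling-window event shape; the record payloads are assumed to be plain JSON numbers to keep the example short):

```python
import base64
import json

def handler(event, context=None):
    """Tumbling-window aggregator: keeps a running sum across invocations.

    Lambda passes the previous invocation's returned state back in
    event["state"]; on the window's last invocation,
    event["isFinalInvokeForWindow"] is True and the total can be emitted.
    """
    state = event.get("state") or {"total": 0}
    for record in event.get("Records", []):
        # Kinesis record payloads are base64-encoded; assume JSON numbers here.
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
        state["total"] += payload

    if event.get("isFinalInvokeForWindow"):
        print(f"window total: {state['total']}")  # emit/ship the aggregate

    # The returned state is handed to the next invocation in the same window.
    return {"state": state}
```

Because the state rides along with the invocations, you get simple streaming aggregation without standing up a separate stream-processing framework.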


AWS Batch

I feel like AWS Batch is one of the more underrated Amazon services. I've been using it off and on since it was released in 2016, and it's a fantastic way to run batch data analytics jobs. It just got even better though with the addition of Fargate as a compute target for Batch. The caveat is that if you need lots of memory for your batch jobs you can't go over 30 GB RAM on Fargate. If you want to use regular AWS Batch with auto-scaling groups you can get up to about 4 TB RAM with a x1e.32xlarge (the 24 TB metal instances are special cases). If you can use Fargate, it could save you some money since Fargate is billed on a per vCPU/hr and per GB/hr basis, and you don't have to wait for EC2 instances to spin up and down.

Serverless Batch Scheduling with AWS Batch and AWS Fargate
Today AWS Batch introduced the ability for customers to specify AWS Fargate as a compute resource for their AWS Batch jobs. With AWS Batch support for AWS Fargate, customers now have a way to run jobs on serverless compute resources, fully managed from job submission to completion. Now, you only need to submit your analytics, map reduce, and other batch workloads and let AWS Batch and AWS Fargate handle the rest.
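Targeting Fargate comes down to registering a job definition with the Fargate platform capability and Fargate-style resource requirements. A sketch of building that request (the job name, image, and role ARN are placeholders; you would pass the resulting dict to boto3's `client("batch").register_job_definition(**job_def)`):

```python
def fargate_job_definition(name: str, image: str, vcpu: str, memory_mb: str,
                           execution_role_arn: str) -> dict:
    """Build a register_job_definition request targeting Fargate.

    Fargate jobs express vCPU and memory as resourceRequirements strings,
    and need an execution role for pulling images and writing logs.
    """
    return {
        "jobDefinitionName": name,
        "type": "container",
        "platformCapabilities": ["FARGATE"],
        "containerProperties": {
            "image": image,
            "resourceRequirements": [
                {"type": "VCPU", "value": vcpu},
                {"type": "MEMORY", "value": memory_mb},
            ],
            "executionRoleArn": execution_role_arn,
        },
    }

job_def = fargate_job_definition(
    name="etl-job",                      # placeholder name
    image="public.ecr.aws/docker/library/python:3.9",
    vcpu="1",
    memory_mb="2048",                    # Fargate tops out around 30 GB
    execution_role_arn="arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
)
# With boto3: boto3.client("batch").register_job_definition(**job_def)
```

After that, job submission is unchanged; Batch schedules onto Fargate instead of an EC2 compute environment.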


Graviton2

A bunch of new Graviton2 processors went into general availability this year. These processors are custom-built by AWS, and as long as you have standard Linux-based workloads they'll probably save you money. I know that 'standard Linux workload' is pretty vague, but basically anything that can be compiled for ARM can run on these instances.
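"Compiled for ARM" in practice means your binaries and wheels target aarch64. A quick sketch for checking at runtime which architecture you landed on (on a Graviton2 instance, `platform.machine()` reports `aarch64`):

```python
import platform

def on_arm64() -> bool:
    """True when running on a 64-bit ARM machine (e.g. Graviton2)."""
    return platform.machine().lower() in ("aarch64", "arm64")

print(f"machine: {platform.machine()}, arm64: {on_arm64()}")
```

Pure-Python code generally just works; the things to audit are native dependencies and any prebuilt binaries in your deployment pipeline.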


ECR

ECR (Elastic Container Registry) used to allow only private Docker container images. Now you can host public ones that people can access with or without an AWS account! Check it out: Amazon ECR Public: A New Public Container Registry

That's it!

I'd love to hear if there were any AWS features that were released in 2020 that you're using heavily. Feel free to send me a note at or tweet @joe_stech. Take care!
