re:Invent the Fourth

I remember my first re:Invent, in 2012. 5,000 people were talking about the new Amazon services they used and how they used them, how the cloud had changed and shrunk their delivery processes, and much more. That was when I realized I was living through a new revolution.

In my mind, I kind of sum up each re:Invent by one main emerging technology.

Of course, these are just shortcuts, as every re:Invent had way more announcements: 2014, for instance, brought Lambda, which was the premise of the 2015 serverless cloud.


This year, there were 19,000 people at re:Invent. If we keep adding 5,000 people every year, we'll soon have to reserve all of Las Vegas for the event!
This huge conference, with a lot of AWS partners explaining their ecosystems, is really great.


Luckily, Las Vegas is full of resources, and the keynote hall also seems to grow automatically every year. Let's take a look at what these keynotes announced.


 AWS EC2 Container Registry

Docker is a really nice technology, but you often want your own private registry, as your application is rarely completely open source. Maintaining a registry is not the funniest thing to do, so why not use a packaged service for it?

 AWS EC2 X1 instance

My first computer was a 133 MHz Pentium with 16 MB of RAM and a 2.5 GB hard drive, so when I see these numbers:
100 vCPUs and 2 TB of RAM, I cannot imagine how many of those first PCs I would have needed to get the same amount of memory (2^17 = 131,072, if you were wondering :) )
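The arithmetic is easy to check, assuming binary units (2 TB = 2 × 1024^4 bytes, 16 MB = 16 × 1024^2 bytes):

```python
# How many 16 MB Pentiums does it take to match 2 TB of RAM?
X1_RAM = 2 * 1024**4        # 2 TB, in bytes (binary units)
PENTIUM_RAM = 16 * 1024**2  # 16 MB, in bytes
print(X1_RAM // PENTIUM_RAM)  # 131072, i.e. 2**17
```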

At the other end of the spectrum, the T2.nano instance makes me wonder if they have an army of Raspberry Pis :)

 AWS Database Migration Service

Migrating a production database without downtime is always hard, because as you import into your new database, the old one is CHANGING, and you need to keep track of those changes. That is what AWS Database Migration Service is for! I'm looking forward to testing it.

 AWS QuickSight

Business intelligence and data visualisation are among the keys to the future, as we now have the ability to record and store everything.

 AWS Snowball

I had a discussion a few days before re:Invent about how to import large amounts of data into the cloud. Amazon already had an import service, but seeing the new solution from a company with such incredible logistics (yes, I am still an amazed Amazon Prime customer) is even more impressive!

 AWS RDS MariaDB engine

Because open source is all about choice, you can now go with MariaDB instead of MySQL or Aurora.


 AWS IoT

Nowadays we have more and more connected devices and sensors that generate data. This data needs to be centralized for analysis, and AWS, of course, has a cloud solution for that.

 AWS Kinesis Firehose

It streamlines loading your streaming data into storage and making it available for analysis.

 AWS Config

The more complex your architecture is, the more difficult it becomes to track configuration changes. Now you can track all those changes and take snapshots at any time, so you can roll back or reproduce a specific configuration.

 AWS Inspector

A lot of rules from different standards apply, so it's always good to have a keeper that can audit you quickly and tell you what you need to improve, thus increasing your confidence in your cloud security.


 AWS WAF

Hackers are always creative, and they always find new ways to penetrate your application. So adding another layer of protection (one that will be updated by specialists) on top of my application sure gives me some relief.

 AWS Lambda

I like to keep the best for last :)


Python environment

After Node.js and Java, here is the new language available. As a Python developer, I'm happy to be able to choose between JavaScript, Java, and Python :)
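For illustration, a minimal Python Lambda handler could look like this sketch (the function name, event fields, and greeting are my own example, not from the keynote):

```python
import json

def handler(event, context):
    # Lambda passes the triggering event as a dict and invocation
    # metadata in `context`; here we just echo back a greeting.
    name = event.get("name", "re:Invent")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": "Hello, {}!".format(name)}),
    }
```

You would point the Lambda configuration at `module.handler`, and the service takes care of running it on demand.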


Scheduled Lambda functions are a nice way to run cron jobs :)


Support for long-running functions will follow scheduled Lambdas.


It looks like we now have everything we need to run with no servers at all.

Serverless Cloud emergence

The inconvenience with EC2 instances is that you need to configure CloudWatch or Elastic Beanstalk to handle instance failures. You also need to orchestrate reserved or spot instances to lower the cost, and so on.

If your service is infrequently used but you cannot predict the usage, you still have to keep at least one server up and running, or even two to avoid downtime.


As a developer, I love to work on small side projects that never go into real production but still need an infrastructure to handle a few requests a month. Their requirements are usually very simple: a REST API, some SQL or NoSQL storage, and finally some binary storage.


So let's see what we have to fit those needs:

  • REST API: Amazon API Gateway to expose AWS Lambda functions
  • SQL: Unfortunately, we still don't have Aurora as a service, and I don't want an EC2 instance running just to answer 100 requests a day
  • NoSQL: DynamoDB is a good fit. It's pay-as-you-go (even if the billing model is quite hard to understand)
  • Binary storage: Good old S3 is there for us; EBS and EFS require EC2 instances
  • Rolling updates: We could use Route 53 to do that
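As a sketch of how the REST part could look, here is a hypothetical Lambda function behind API Gateway, routing by HTTP method. All names are my own, and a plain dict stands in where a DynamoDB table would sit, to keep the example self-contained:

```python
import json

# Stand-in for a DynamoDB table, so the sketch runs on its own.
FAKE_TABLE = {}

def handler(event, context):
    """Route an API Gateway-style request to a tiny key/value store."""
    method = event["httpMethod"]
    key = event["path"].lstrip("/")
    if method == "PUT":
        # Store the JSON body under the path as key.
        FAKE_TABLE[key] = json.loads(event["body"])
        return {"statusCode": 204, "body": ""}
    if method == "GET" and key in FAKE_TABLE:
        return {"statusCode": 200, "body": json.dumps(FAKE_TABLE[key])}
    return {"statusCode": 404, "body": ""}
```

API Gateway would map each HTTP call to an event dict like the one above; swapping the dict for a real DynamoDB table would be the main change for production.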

As a beer lover, what better example than the Simple Beer Service! Of course, the big data part requires a Redshift cluster, but it could run for only a few hours a month.



Author: Rémi Cattiau
Email: remi@cattiau.com
Twitter: @loopingz
GitHub: @loopingz

See you on November 28th, 2016 for another amazing week!