Today, AWS is the most popular public cloud provider. To date, more than 100 unique services are available for consumption across over 20 categories. These offerings make it easy to take advantage of capabilities you need but that are not your core IP. One interesting observation I have made is that in many cases, to consume an AWS service, you actually need one or more additional AWS services. Read on to learn more!
AWS makes it really easy to get started with their services. For example, want to configure an Elasticsearch cluster? A few clicks on the Elasticsearch Service, and it will be provisioned. Want to resize the cluster? No problem, a few more clicks, and the cluster will be reconfigured. The interesting part actually happens several days or weeks later, when you run out of disk space. More than likely, you only want to retain data for a limited amount of time. Unfortunately, you cannot configure this from the UI. Of course, the API can perform the operation, but then you need somewhere to run the API calls from. AWS’s answer? Lambda! To be fair, you are not required to use Lambda to delete data from Elasticsearch, but you must use the API, and the script you create must run somewhere.
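To make this concrete, here is a rough sketch of what such a cleanup script could look like as a Lambda handler. The endpoint, index prefix, and retention period below are placeholders of my own, and SigV4 request signing is left out for brevity, so treat it as an illustration of the idea rather than a drop-in solution:

```python
import os
from datetime import datetime, timedelta, timezone

import requests  # not in the Lambda runtime by default; bundle it with the function

# Hypothetical values; substitute your own domain endpoint, index prefix, and retention.
ES_ENDPOINT = os.environ.get("ES_ENDPOINT", "https://search-mydomain.us-east-1.es.amazonaws.com")
INDEX_PREFIX = os.environ.get("INDEX_PREFIX", "logstash-")   # daily indices, e.g. logstash-2021.06.01
RETENTION_DAYS = int(os.environ.get("RETENTION_DAYS", "14"))


def handler(event, context):
    """Delete daily indices older than RETENTION_DAYS.

    Assumes the domain's access policy allows this Lambda to call the
    Elasticsearch REST API directly (SigV4 request signing omitted for brevity).
    """
    cutoff = datetime.now(timezone.utc) - timedelta(days=RETENTION_DAYS)

    # List indices matching the prefix via the _cat API.
    resp = requests.get(f"{ES_ENDPOINT}/_cat/indices/{INDEX_PREFIX}*?h=index&format=json")
    resp.raise_for_status()

    for entry in resp.json():
        index = entry["index"]
        try:
            # Daily indices encode their date as a suffix, e.g. logstash-2021.06.01.
            index_date = datetime.strptime(
                index[len(INDEX_PREFIX):], "%Y.%m.%d"
            ).replace(tzinfo=timezone.utc)
        except ValueError:
            continue  # skip indices that do not follow the naming convention
        if index_date < cutoff:
            requests.delete(f"{ES_ENDPOINT}/{index}").raise_for_status()
            print(f"Deleted index {index}")
```

Wire the handler to a scheduled CloudWatch Events rule and, notably, you are now using yet another AWS service just to keep the first one healthy.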
Taking a step back, this marketing genius actually shows up much earlier, though it may not be as apparent. For example, the cluster and instance health graphs provided are backed by AWS CloudWatch, as is logging if you have configured it. In addition, Elasticsearch runs on top of AWS EC2 (this is abstracted away from you: the instances do not appear in the EC2 console, and their cost is included in the price of the Elasticsearch service). At a minimum, Elasticsearch requires CloudWatch, and over time you may find yourself considering Lambda as well.
From the AWS perspective, this is brilliant. Hook the customer with one service, then fill a gap in that service by providing another service, along with a working example (in this case, code) that performs the additional operation (e.g., deleting old data). As you start to consume AWS services, you will see they do this everywhere. Want to deploy a web server on EC2? You will likely need Route53, CloudFront, and maybe even Global Accelerator. Want to consume Kinesis? You will likely need the Kinesis Client Library (KCL), which has a dependency on DynamoDB. Their ability to do this subtly (i.e., requiring work and offering a solution) is beneficial to them for at least a few reasons:
- “Land and expand” — a customer starts with one service but then gets exposure to others, which increases the likelihood that AWS makes more money.
- Vendor lock-in — while some of the services are open source, the interactions with them are often complex, meaning it is non-trivial for a customer to move to another solution.
- Minimizing competition — AWS has a native solution and has likely made it easy to get started with it.
From a product perspective, this is nirvana. Coming up with a great idea and productizing it is generally easy, but getting adoption and, more importantly, increased revenue is much harder. By loosely coupling services together, AWS increases the probability of a customer leveraging more and more services. The exposure alone is free marketing, and in large companies, some engineer is likely going to be interested in playing with new services.
Is this bad for the customer? It depends. On the one hand, customers get more value in a shorter amount of time. In addition, they can focus on their core IP and not all the surrounding stuff required to make it all work. On the other hand, the public cloud is expensive, and vendor lock-in can be a real problem. In the Cloud Native world, the infrastructure has been commoditized by technologies including Kubernetes. The orchestration engine promises portability, but the devil is always in the details. Decoupling applications from service dependencies (e.g., Kinesis to Kafka) can be straightforward, but depending on the services you are consuming and how they are configured, it can also prove non-trivial. Long story short, it is unlikely, even in the Cloud Native world, that you can move from, say, AWS to GCP easily.
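For illustration, that decoupling usually amounts to hiding the vendor SDK behind an interface the application owns. Here is a minimal sketch of the pattern, using class names of my own choosing and the boto3 and kafka-python packages:

```python
from abc import ABC, abstractmethod


class EventPublisher(ABC):
    """Application-facing abstraction; business logic never imports a vendor SDK directly."""

    @abstractmethod
    def publish(self, key: str, payload: bytes) -> None:
        ...


class KinesisPublisher(EventPublisher):
    def __init__(self, stream_name: str):
        import boto3  # AWS SDK, only needed when this backend is selected
        self._client = boto3.client("kinesis")
        self._stream = stream_name

    def publish(self, key: str, payload: bytes) -> None:
        self._client.put_record(StreamName=self._stream, PartitionKey=key, Data=payload)


class KafkaPublisher(EventPublisher):
    def __init__(self, topic: str, bootstrap_servers: str):
        from kafka import KafkaProducer  # kafka-python package
        self._producer = KafkaProducer(bootstrap_servers=bootstrap_servers)
        self._topic = topic

    def publish(self, key: str, payload: bytes) -> None:
        self._producer.send(self._topic, key=key.encode("utf-8"), value=payload)
```

The interface keeps the swap mechanical, but in practice the hard part is everything around it: IAM, shard versus partition semantics, checkpointing, and the other services the original choice quietly pulled in.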
So why am I writing this? Because:
- Knowledge sharing — Not everyone is aware of the tactics vendors are taking, and the above information can be helpful when making company decisions.
- Product Management — As Head of Product at a stealth startup, I find these tactics interesting. If every vendor were able to apply them, they could increase their reach and bottom line, but the above scenario is not applicable to every market.
What vendor tactics are most interesting to you, and what is your response to them?