Welcome to my new series of posts aimed at crafting a robust local development environment that mirrors your production setup in the cloud. The essence is to provide a playground where you can debug, test, and get a feel for your applications before they hit the production stage. Over the span of these posts, we’ll be tinkering with various tools and technologies, knitting them together to form a setup that’s not only developer-friendly but also educational.
In this first post, we are setting the stage with a local Kubernetes cluster using kind, KEDA, and LocalStack AWS to bring in a semblance of cloud functionalities. Before diving into the setup, let’s have a brief rundown of these components:
- Kubernetes: An open-source platform designed to automate deploying, scaling, and operating application containers.
- kind (Kubernetes in Docker): A tool for running local Kubernetes clusters using Docker container “nodes”.
- KEDA (Kubernetes Event-Driven Autoscaling): A set of components that extends Kubernetes to provide event-driven autoscaling for every container.
- LocalStack AWS: A fully functional local AWS cloud stack for testing and mocking AWS services locally.
Now, with our actors introduced, let’s dive into creating a local development environment that will mimic our cloud setup.
Prerequisite: Docker Installation
Before we go on and install kind, make sure that Docker is installed on your machine, as kind requires it to create clusters. If Docker isn’t installed yet, follow the Docker installation instructions.
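A quick way to confirm the Docker daemon is actually up before proceeding:

```shell
# Prints a status line depending on whether the Docker daemon responds.
if docker info >/dev/null 2>&1; then
  echo "Docker is running"
else
  echo "Docker is not running (or not installed) - start it before continuing"
fi
```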
Setting Up kind
Now onto kind. This tool will house our local Kubernetes cluster, making the management of our local environment a breeze. Install kind using brew with the following command:
brew install kind
If you’re not on macOS or prefer a different method of installation, refer to kind’s GitHub page for alternatives.
Once kind is installed, create a cluster with three nodes to mimic a more realistic production environment using the following command:
kind create cluster --config kind-config.yaml
Where kind-config.yaml contains:
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
Voila! You’ve spun up a local Kubernetes cluster with three nodes. To interact with your new cluster, ensure you have kubectl installed. If not, follow these instructions.
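With kubectl in place, a quick sanity check confirms all three nodes came up (this assumes your context points at the new cluster, which kind names kind-kind by default):

```shell
# List the cluster's nodes: one control-plane and two workers should appear.
# Falls back to a message if the cluster isn't reachable yet.
kubectl get nodes --context kind-kind 2>/dev/null || echo "cluster not reachable - check that kind create cluster completed"
```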
Integrating KEDA
Next, we usher in KEDA. This component will let our Kubernetes containers scale based on event metrics, a scenario we often encounter in production. To install KEDA, run the following helm commands:
helm repo add kedacore https://kedacore.github.io/charts
helm repo update
kubectl create namespace keda
helm install keda kedacore/keda --namespace keda
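Once the chart is installed, you can confirm the KEDA operator pods reached Running status in the keda namespace (again assuming kubectl is configured for the kind cluster):

```shell
# The KEDA operator pods (operator, metrics server, etc.) should be Running.
kubectl get pods -n keda 2>/dev/null || echo "keda namespace not reachable yet"
```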
Incorporating LocalStack AWS
Now, it’s LocalStack’s turn to join the party. This component will simulate AWS services locally. Start LocalStack with Docker, then use the AWS CLI (pointed at LocalStack’s edge port, 4566) to create a bucket and upload a test object:
docker run -d -e SERVICES=s3 -p 4566:4566 localstack/localstack
aws --endpoint-url=http://localhost:4566 s3api create-bucket --bucket your-bucket
aws --endpoint-url=http://localhost:4566 s3api put-object --bucket your-bucket --key dir-1/my_images.tar.bz2 --body my_images.tar.bz2
We’ve just initiated LocalStack with the S3 service. Feel free to replace or append to the SERVICES environment variable to include other AWS services as needed.
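Since every AWS CLI call against LocalStack needs the same --endpoint-url flag, a small wrapper keeps things tidy. LocalStack accepts any credentials, so dummy values are fine; the awslocal name below mirrors LocalStack’s own helper of the same name, but here it is just a local shell function:

```shell
# Dummy credentials - LocalStack doesn't validate them,
# but the AWS CLI refuses to run without something set.
export AWS_ACCESS_KEY_ID=test
export AWS_SECRET_ACCESS_KEY=test
export AWS_DEFAULT_REGION=us-east-1

# Wrapper that points every aws command at LocalStack's edge port.
awslocal() {
  aws --endpoint-url=http://localhost:4566 "$@"
}

# Example (requires LocalStack to be running):
# awslocal s3api list-buckets
```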
Wiring It All Together
With all pieces in place, let’s deploy a simple application to our local Kubernetes cluster and witness the setup in action. Create an app.yaml file with the following content:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
      - name: example-app
        image: your-image:latest
        env:
        - name: AWS_S3_ENDPOINT
          # LocalStack's edge port is 4566. Note that "localhost" inside a
          # pod refers to the pod itself, so point this at an address that
          # is reachable from the cluster (for example, LocalStack deployed
          # in-cluster, or your host machine's IP).
          value: http://localhost:4566
---
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: example-app-scaler
spec:
  scaleTargetRef:
    name: example-app
  triggers:
  # KEDA has no native S3 scaler; a common pattern is to route S3 event
  # notifications to an SQS queue and scale on the queue's length instead.
  # "your-queue" is a placeholder - create it in LocalStack first
  # (aws --endpoint-url=http://localhost:4566 sqs create-queue --queue-name your-queue).
  - type: aws-sqs-queue
    metadata:
      queueURL: http://localhost:4566/000000000000/your-queue
      queueLength: "5"
      awsRegion: us-east-1
Apply this manifest to your cluster:
kubectl apply -f app.yaml
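After applying, you can watch KEDA pick up the ScaledObject; behind the scenes it manages a HorizontalPodAutoscaler for the deployment (assuming kubectl still points at the kind cluster):

```shell
# The ScaledObject, plus the HPA that KEDA creates from it.
kubectl get scaledobject example-app-scaler 2>/dev/null || echo "scaledobject not found yet"
kubectl get hpa 2>/dev/null || echo "hpa not created yet"
```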
Voila! You now have a local Kubernetes setup intertwined with KEDA and LocalStack AWS, ready to simulate a production-like environment on your machine.
Conclusion
This setup marks the first milestone in our journey towards creating a resilient local development environment. It’s a playground to explore, develop, and test applications in a controlled setting before they venture out into the cloud.
Stay tuned as we continue to refine our local setup in the upcoming posts, ensuring a smoother transition from your local machine to the cloud production environment. Until then, happy coding!
To be continued… with more insights on optimizing your local dev setup.