apps-on-azure

Hackathon!

The hackathon is your chance to spend some decent time modelling and deploying a Kubernetes app on your own.

You’ll use all the key skills you’ve learned in the course, this time working from requirements rather than step-by-step instructions.

That’s why the hackathon is so useful!

It will help you understand which areas you’re comfortable with and where you need to spend some more time.

And it will give you an app that you modelled yourself, which you can use as a reference next time you model a new app.

ℹ There are several parts to the hackathon - you’re not expected to complete them all. In some classes we have a whole day for this, in others just a few hours. Get as far as you can in the time - it’s all great experience.

Remember to use the minikube cluster for this hackathon:

```
kubectl config use-context minikube
```

Part 1 - Welcome to Widgetario

Widgetario is a company which sells gadgets. They want to run their public web app on Kubernetes.

They’ve made a start - all the components are packaged into container images and published on Docker Hub. Your job is to get it running in Kubernetes for them.

Use this architecture diagram as the basis to model your YAML. It has the port numbers, Docker image tags and the number of replicas needed for each component:

It’s not much to go on, but it has all the information you need for the first stage.

**Hints**: The component names in the diagram are the DNS names the app expects to use. And when you're working on the YAML, it's easier to start with one replica for every component and get it working before you scale up.
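If you want a shape to start from, here's a sketch of a single component's spec - the image tag, ports and replica count are placeholders, the real values are in the diagram:

```
# sketch of one component - a Deployment plus a Service; the image tag
# and ports are placeholders, use the values from the architecture diagram
apiVersion: apps/v1
kind: Deployment
metadata:
  name: products-api
spec:
  replicas: 1                # start with 1 replica, scale up once it works
  selector:
    matchLabels:
      app: products-api
  template:
    metadata:
      labels:
        app: products-api
    spec:
      containers:
        - name: app
          image: widgetario/products-api:21.03   # placeholder tag
          ports:
            - containerPort: 80                  # placeholder port
---
apiVersion: v1
kind: Service
metadata:
  name: products-api         # the Service name is the DNS name the app expects
spec:
  selector:
    app: products-api
  ports:
    - port: 80               # placeholder port
      targetPort: 80
```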


When you’re done you should be able to browse to the Service port on your cluster and see this:

**Solution**: If you didn't get part 1 finished, you can check out the specs in the sample solution from `hackathon/solution-part-1`. Deploy the sample solution and you can continue to part 2:

```
kubectl apply -f hackathon/solution-part-1/products-db -f hackathon/solution-part-1/products-api -f hackathon/solution-part-1/stock-api -f hackathon/solution-part-1/web
```


Part 2 - Configuration

Well done! It seems pretty straightforward when you look at the YAML, but now we need to go to the next stage and stop using the default configuration in the Docker images.

Why? If you run `docker image inspect widgetario/products-db:21.03` you’ll see the database password is in the default environment variables, so anyone who gets hold of the image will know our production password.
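A Secret is the usual fix here - this is a sketch, assuming the variable is called POSTGRES_PASSWORD (confirm the real name from the image inspect output):

```
# sketch: move the password out of the image and into a Secret -
# the POSTGRES_PASSWORD variable name is an assumption, check the image
apiVersion: v1
kind: Secret
metadata:
  name: products-db-password
type: Opaque
stringData:
  POSTGRES_PASSWORD: sample-pwd-not-for-production

# then reference it in the database Pod spec instead of the default env:
#   env:
#     - name: POSTGRES_PASSWORD
#       valueFrom:
#         secretKeyRef:
#           name: products-db-password
#           key: POSTGRES_PASSWORD
```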

Also the front-end team are experimenting with a new dark mode, and they want to quickly turn it on and off with a config setting.

You’ll need to model configuration for all the components, but the product teams can’t help so you’ll need to figure out what goes where yourself.

**Hints**: You have the app working from part 1, so you can investigate the current configuration by running commands in the Pods (`printenv`, `ls` and `cat` will be useful).
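For example - these commands assume your Deployments are named after the components in the diagram:

```
# print the environment variables in the products API Pod:
kubectl exec deploy/products-api -- printenv

# explore the web Pod's filesystem for config files - the path is a guess:
kubectl exec deploy/web -- ls /app
```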


When you’ve rolled out your update, the UI will be updated but the products and stock details should be the same:

**Solution**: If you didn't get part 2 finished, you can check out the specs in the sample solution from `hackathon/solution-part-2`. Deploy the sample solution and you can continue to part 3:

```
kubectl apply -f hackathon/solution-part-2/products-db -f hackathon/solution-part-2/products-api -f hackathon/solution-part-2/stock-api -f hackathon/solution-part-2/web
```


Part 3 - Storage

It’s going well, but we need to think about storage.

The Stock API has a nice caching feature: when it fetches data from the database it stores a local copy in the filesystem. We want that cache to be kept available if the app crashes, but it doesn’t need persistent storage - it’s just a performance boost.
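That requirement maps neatly onto an emptyDir volume, which survives container restarts but not Pod deletion - here's a sketch, with a placeholder mount path and image tag:

```
# sketch: emptyDir keeps the cache through container restarts, with no
# persistent storage - the mount path and image tag are placeholders
apiVersion: apps/v1
kind: Deployment
metadata:
  name: stock-api
spec:
  selector:
    matchLabels:
      app: stock-api
  template:
    metadata:
      labels:
        app: stock-api
    spec:
      containers:
        - name: app
          image: widgetario/stock-api:21.03   # placeholder tag
          volumeMounts:
            - name: cache
              mountPath: /cache               # placeholder - find the real cache path
      volumes:
        - name: cache
          emptyDir: {}
```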

And the DB team are keen to spend more time looking at running Postgres in Kubernetes, with persistent storage for the data. They’ve built an alternative Docker image which can run a replicated database as a primary and a secondary.

**Hints**: You'll need to change configuration for the APIs, but for the database you'll need to switch to a different type of workload altogether. And if you have products-db data volumes left over from other labs, they'll need to be removed.
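If you read that hint as pointing to a StatefulSet, the shape is something like this - the image tag, port and storage size are assumptions:

```
# sketch: StatefulSet with a volume claim template for per-Pod storage;
# the image tag, port and storage size here are assumptions
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: products-db
spec:
  serviceName: products-db     # needs a matching headless Service
  replicas: 2                  # primary + secondary
  selector:
    matchLabels:
      app: products-db
  template:
    metadata:
      labels:
        app: products-db
    spec:
      containers:
        - name: db
          image: widgetario/products-db:replicated   # placeholder tag
          ports:
            - containerPort: 5432
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 100Mi     # placeholder size
```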


The app won’t look any different if you get your update right. If not, you’ll need to dig into the logs.

**Solution**: If you didn't get part 3 finished, you can check out the specs in the sample solution from `hackathon/solution-part-3`. Deploy the sample solution and you can continue to part 4:

```
# remove the old database:
kubectl delete deploy products-db
kubectl delete svc products-db

# you may have some PVCs lingering from the labs:
kubectl delete pvc -l app=products-db

# deploy the new specs:
kubectl apply -f hackathon/solution-part-3/products-db -f hackathon/solution-part-3/products-api -f hackathon/solution-part-3/stock-api -f hackathon/solution-part-3/web

# rollout the APIs to load new config:
kubectl rollout restart deploy/products-api deploy/stock-api
```


Part 4 - Ingress (Optional)

We’re nearly there, but all these non-standard ports are no good. We’d like to get some proper DNS names set up, so we can publish the web app and the products API on standard HTTP ports:

- http://widgetario.local for the web app
- http://api.widgetario.local for the products API

**Hints**: Ingress controllers are generic components, so you can deploy the one we used in an earlier lab. And remember how Kubernetes doesn't check to see if the port names match when you deploy Services and Pods? It doesn't check with Ingress objects either.
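A sketch of the web routing rule - the Service name, port and Ingress class are assumptions, and they need to match your other specs exactly because Kubernetes won't validate them:

```
# sketch: Ingress rule for the web app - the backend Service name and
# port must match your Service spec, Kubernetes won't check this for you
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web
spec:
  ingressClassName: nginx     # depends on the controller you deployed
  rules:
    - host: widgetario.local
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web     # assumption - use your Service's real name
                port:
                  number: 80
```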


The app will still look the same from the new domains. If not, you’ll need to look at endpoints and object descriptions.

**Solution**: If you didn't get part 4 finished, you can check out the specs in the sample solution from `hackathon/solution-part-4`. Deploy the sample solution and you can continue to part 5:

```
kubectl apply -f hackathon/solution-part-4/ingress-controller -f hackathon/solution-part-4/products-db -f hackathon/solution-part-4/products-api -f hackathon/solution-part-4/stock-api -f hackathon/solution-part-4/web
```

Update your hosts file:

```
# Windows (run as Admin)
./scripts/add-to-hosts.ps1 widgetario.local 127.0.0.1
./scripts/add-to-hosts.ps1 api.widgetario.local 127.0.0.1

# Linux/macOS
./scripts/add-to-hosts.sh widgetario.local 127.0.0.1
./scripts/add-to-hosts.sh api.widgetario.local 127.0.0.1
```

- check the app at http://widgetario.local
- and the API at http://api.widgetario.local/products


Part 5 - Productionizing

All right!

We’ve commissioned a 200-node Kubernetes cluster in the cloud and we’re ready to go.

Now’s your chance to make any last updates to the Pod spec you think we need before we go to production.

**Hints**: No :) This one is up to you - decide what you think is important.
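If you're stuck for ideas, this is the general shape of a hardened container spec - a sketch only, sitting inside the Pod template of a Deployment, with placeholder paths, ports and values:

```
# sketch: inside the Pod template of a Deployment - the probe path, port
# and resource values are placeholders, tune them to your app
      containers:
        - name: app
          image: widgetario/products-api:21.03   # placeholder tag
          readinessProbe:
            httpGet:
              path: /healthz                     # placeholder path
              port: 80
            periodSeconds: 10
          livenessProbe:
            httpGet:
              path: /healthz
              port: 80
            initialDelaySeconds: 10
            periodSeconds: 20
          resources:
            limits:
              cpu: 250m
              memory: 200Mi
          securityContext:
            runAsNonRoot: true
            allowPrivilegeEscalation: false
```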


**Solution**: If you didn't get part 5 finished, you can check out the specs in the sample solution from `hackathon/solution-part-5`. My main focus in the samples is productionizing the Pod specs:

- adding readiness and liveness probes
- setting resource limits
- increasing security

My changes are all in the Deployment and StatefulSet objects - if you diff the files between parts 4 and 5, you'll see where the changes are. Deploy:

```
kubectl apply -f hackathon/solution-part-5/ingress-controller -f hackathon/solution-part-5/products-db -f hackathon/solution-part-5/products-api -f hackathon/solution-part-5/stock-api -f hackathon/solution-part-5/web
```

The StatefulSet rollout takes a few minutes, and the app may not be responsive until both Pods are up. **And** there are CPU resources in the specs, so if your cluster doesn't have enough capacity you may see Pods stuck in the _Pending_ status and you'll need to adjust the values.

And we're good to go.


Part 6 - Observability

Not quite. The SRE team won’t let the app go live without any observability into the components. They want to confirm the app will integrate nicely with the standard stacks they use.

All the API and web servers publish metrics, so we don’t need to change any code. To prove monitoring is usable you’ll need to:

- run a Prometheus server which collects metrics from the application Pods
- run Grafana with a dashboard showing the health of the components
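Before wiring up Prometheus you can confirm a component really does publish metrics - the ports and the /metrics path here are assumptions, adjust them to your specs:

```
# port-forward to a component and check its metrics endpoint -
# the local port, target port and /metrics path are assumptions:
kubectl port-forward deploy/stock-api 8080:80
# then in another terminal:
curl http://localhost:8080/metrics
```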

Here’s the dashboard you should see:

And then you’ll also need to set up centralized logging:

- run Fluent Bit to collect the logs from the application Pods
- store them in Elasticsearch, with Kibana to search them

Here’s what the Kibana dashboard should look like:

**Hints**: The monitoring and logging stacks are standard components, so you can run them from the specs we used in earlier labs. You won't need to tweak the Prometheus or Fluent Bit configuration, unless you're using a custom namespace for your Widgetario Pods...
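If metrics aren't showing up, the Prometheus target list is the place to look - the Service name and namespace here are assumptions based on the earlier lab, check yours first:

```
# find the Prometheus Service - the name and namespace are assumptions:
kubectl get svc -A | grep prometheus
kubectl port-forward svc/prometheus 9090:9090 -n monitoring
# then browse to http://localhost:9090/targets to see the scrape status
```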


The app will still look the same. You should see those fancy dashboards and be able to search for logs for each component.

**Solution**: If you didn't get part 6 finished, you can check out the specs in the sample solution from `hackathon/solution-part-6`. The specs in the `widgetario` folder have been re-organised to have one YAML file for each component. Deploy the sample solution and you can continue to part 7:

```
kubectl apply -f hackathon/solution-part-6/monitoring -f hackathon/solution-part-6/logging -f hackathon/solution-part-6/ingress-controller -f hackathon/solution-part-6/widgetario
```

There's a change to the StatefulSet spec (to explicitly opt out of metrics collection), and it will take a while for the rollout to complete. You can browse to the UIs using NodePort or LoadBalancer Services:

```
kubectl get svc -A -l kubernetes.courselabs.co=hackathon
```

* Grafana on http://localhost:30003 or http://localhost:3000
* Kibana on http://localhost:30005 or http://localhost:5601


Cleanup

Uninstall your Helm charts, e.g.:

```
helm uninstall -n widg-smoke widg-smoke
helm uninstall -n widg-uat widg-uat
```

And then delete objects - the sample solution applies a label we can select on:

```
kubectl delete all,ingress,cm,secret,pvc,rolebinding,clusterrolebinding,role,clusterrole,ns -l kubernetes.courselabs.co=hackathon

kubectl delete ns widg-smoke widg-uat
```
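You can check nothing is left behind with the same label selector:

```
# confirm the cleanup worked - this should return nothing:
kubectl get all,ingress,cm,secret,pvc -A -l kubernetes.courselabs.co=hackathon
```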