Since sharing is caring, Digia’s experts Jere Valtanen and Timo Relander wanted to share a hands-on guide about how to set up Kubernetes in Google Cloud.
Kubernetes is an open-source platform for managing containerized workloads and services. A typical use case for Kubernetes is orchestrating and running Docker containers.
Here’s a short explanation of the most important Kubernetes components:
Cluster: A cluster is the set of machines that together run your application, coordinated by a master. When a program is deployed onto the cluster, the work is automatically distributed to individual node machines. When nodes are deleted or added, the cluster reacts by shifting the work as necessary. Ideally, the programmer should not have to worry about which nodes are actually running the program.
Node: Nodes are the components running applications inside Kubernetes. It is a machine with computing power, typically a physical machine or a virtual machine hosted in the cloud. Essentially each node is a set of CPU and RAM resources that can be utilized by the Kubernetes cluster for running the program.
Pod: Pods run inside nodes. They can run one or more containers while sharing the same resources and local network. If you have containers that require easy communication between each other, it’s a good idea to run these containers in the same pod. Pods are the component used in load balancing. New replicas of pods are created under load to keep the program running.
Using Google Cloud Kubernetes Engine (Linux)
The purpose of this guide is to get you started using Kubernetes in the Google Cloud platform. To complete this guide you will need a Google account and a project created in the Google Cloud platform.
The Google Cloud Platform console can be accessed here and is occasionally used in this guide for firewall settings. Make sure to familiarize yourself with the console, as you can use it to view your newly created Kubernetes cluster and its nodes and pods.
Most of the operations will be completed using the Google Cloud SDK command-line interface. You can follow the quick start guide or complete the following guide:
1. Make sure that Python 2.7 is installed on your system:
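Cloud SDK releases of this era expect Python 2.7. A quick way to check which interpreter is on your PATH (the exact command name may differ on your distribution):

```shell
# Print the version of the default Python interpreter (the SDK expects 2.7.x)
python -V 2>&1 || echo "python not found - install Python 2.7 first"
```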
2. Download Google Cloud SDK for Linux from here
Extract the downloaded archive to any location on your device
3. Navigate to the folder where the extracted archive is and run:
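Assuming you extracted the archive into your current directory, the install script lives inside the google-cloud-sdk folder (the path below reflects that assumption):

```shell
# Run the SDK install script; it adds gcloud to your PATH
# and can enable shell command completion
./google-cloud-sdk/install.sh
```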
4. Initialize the SDK with the following command and follow the instructions (choose a project and a default region if you wish; check the list of available regions here).
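The initialization itself is a single command; gcloud then walks you through login, project selection and the default region interactively:

```shell
# Start the interactive SDK setup (login, project, default zone)
gcloud init
```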
5. If you skipped the zone selection during initialization, run the following command (this one sets it to Finland):
gcloud config set compute/zone europe-north1-a
6. Install the kubectl component to the Cloud SDK. kubectl is the Kubernetes command-line interface tool:
gcloud components install kubectl
You can upload Docker images to use in Kubernetes into the Google Cloud container registry by following the quick start tutorial. Alternatively, you can just use a publicly available Docker image.
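If you go the Container Registry route, the usual workflow is to tag the local image with the registry address and push it. The image name my-app:latest below is a placeholder, as is <PROJECT ID>:

```shell
# Let Docker authenticate against gcr.io using your gcloud credentials
gcloud auth configure-docker

# Tag a local image for the registry and push it
docker tag my-app:latest gcr.io/<PROJECT ID>/my-app:latest
docker push gcr.io/<PROJECT ID>/my-app:latest
```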
Limitations and Requirements with network policy regarding Kubernetes cluster
Your cluster must have at least 2 nodes of type n1-standard-1 or higher. The recommended minimum size cluster to run network policy enforcement is 3 n1-standard-1 instances.
Network policy is not supported for clusters whose nodes are f1-micro or g1-small instances, as the resource requirements are too high for instances of that size. Because of this, we will not be setting network policies in this tutorial.
Creating Kubernetes engine cluster
Now we will create a Kubernetes Engine cluster with small machine instances. NOTE: Do not create a micro instance. It has too few resources to run Kubernetes.
To create a regular cluster without Prometheus monitoring, run the following:
gcloud container clusters create <CLUSTER NAME> --machine-type=g1-small --num-nodes 1 --enable-autoscaling --min-nodes 1 --max-nodes 2
To create a cluster with Prometheus monitoring, you will need to run the beta command with Stackdriver Kubernetes monitoring enabled:
gcloud beta container clusters create <CLUSTER NAME> --machine-type=g1-small --enable-stackdriver-kubernetes --cluster-version=1.10 --num-nodes 1 --enable-autoscaling --min-nodes 1 --max-nodes 2
Let’s look at the options in the command we just ran:
--machine-type: Sets the type of machine instance that is created.
--zone: Sets the region that the Kubernetes cluster will be deployed in. If left empty, it will default to the region set in the gcloud config. Typically, nodes will be deployed into three sub-zones inside the region, e.g. europe-west4-a, europe-west4-b and europe-west4-c. To deploy into one sub-zone, simply set --zone to one of the sub-zones.
--num-nodes: Sets the initial number of nodes that are created per zone.
--enable-autoscaling: Enables automatic scaling of nodes.
--min-nodes: Sets the minimum number of nodes that the cluster can be scaled down to.
--max-nodes: Sets the maximum number of nodes that the cluster can be scaled up to.
Check Google Cloud's website for more command options.
Authenticating the cluster
After creating your cluster, you need to get authentication credentials to interact with the cluster. After this, all kubectl commands will point to your newly created cluster. To authenticate, run the following command:
gcloud container clusters get-credentials <CLUSTER NAME>
Next, you want to run a program inside your cluster using a Docker image from a Docker repository (in this case we're using Google Cloud Container Registry):
kubectl run <APP NAME> --image <IMAGE ADDRESS:TAG> --port <PORT NUMBER TO OPEN>
kubectl run grafana --image grafana/grafana:latest --port 3000
Alternatively, you can create a deployment configuration file to run your Docker image:
# This file configures the deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: <DEPLOYMENT NAME>
spec:
  # The replica count ensures that at least 3 instances of the app are running on the cluster.
  # For more info about Pods see: https://cloud.google.com/container-engine/docs/pods/
  replicas: 3
  selector:
    matchLabels:
      app: node-app
  template:
    metadata:
      labels:
        app: node-app
    spec:
      containers:
      - name: node-app
        # Replace [GCLOUD_PROJECT] with your project ID or use `make template`.
        image: <IMAGE ADDRESS:TAG>
        # This setting makes nodes pull the docker image every time before
        # starting the pod. This is useful when debugging, but should be
        # turned off in production.
        imagePullPolicy: Always
        ports:
        - name: http-server
          containerPort: <PORT NUMBER>
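Assuming you saved the configuration file as deployment.yaml (the filename is your choice), apply it with kubectl:

```shell
# Create or update the deployment described in the file
kubectl apply -f deployment.yaml
```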
You have now successfully deployed a container into your Kubernetes cluster. You can check the status of your pods by running the following command. Note that it may take a few moments for the pods to begin running.
kubectl get pods
If you wish, you can delete the firewall rules that allow traffic to your app from everywhere in your Google Cloud console's VPC network settings. Make sure to also create a rule that allows access from your own IP.
Next you will create a LoadBalancer service that will expose the application to the outside network:
kubectl expose deployment <APP_NAME> --type LoadBalancer --port <PORT NUMBER> --target-port <PORT NUMBER>
Your application should now be running and ready to be accessed. To find the IP address of your application, run the following command and check the EXTERNAL-IP field:
kubectl get svc
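The output looks roughly like the following (the names and IPs below are made up for illustration). The EXTERNAL-IP field shows <pending> until the load balancer has been provisioned, so re-run the command after a minute or two:

```shell
kubectl get svc
# NAME         TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)          AGE
# grafana      LoadBalancer   10.3.247.109   35.228.10.123   3000:31557/TCP   2m
# kubernetes   ClusterIP      10.3.240.1     <none>          443/TCP          1h
```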
If you enabled Stackdriver Kubernetes monitoring during cluster creation for Prometheus support, follow the quick start here to set up Prometheus.
If you have any questions regarding the guide, please leave a comment below!