Deploy Kubernetes Load Balancer Service with Terraform in Google Cloud
Overview
In Terraform, a provider is the logical abstraction of an upstream API. This lab shows you how to set up a Kubernetes cluster and deploy a LoadBalancer-type NGINX service on it.
Objectives
In this lab, you will learn how to:
- Deploy a Kubernetes cluster along with a service using Terraform.
K8s Services
Services provide important features that are standardized across the cluster: load-balancing, service discovery between applications, and features to support zero-downtime application deployments. Each service has a pod label query which defines the pods which will process data for the service. This label query frequently matches pods created by one or more replication controllers. Powerful routing scenarios are possible by updating a service’s label query via the Kubernetes API with deployment software.
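The label query described above can be sketched in Terraform itself. The following is a hypothetical illustration (resource names and labels are invented for this example, not taken from the lab files): the selector block is the service's label query, and any pod carrying the matching label receives traffic.

```hcl
# Hypothetical example: "selector" is the service's pod label query.
# Every pod labeled app = "nginx" is routed traffic by this service.
resource "kubernetes_service" "example" {
  metadata {
    name = "nginx"
  }
  spec {
    selector = {
      app = "nginx"
    }
    port {
      port        = 80
      target_port = 80
    }
    type = "LoadBalancer"
  }
}
```

Updating the `selector` map (for example, adding a `track = "stable"` label) changes which pods serve traffic, which is the basis of the routing scenarios mentioned above.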
Why Terraform?
While you could use kubectl or similar CLI-based tools mapped to API calls to manage all Kubernetes resources described in YAML files, orchestration with Terraform presents a few benefits:
- One language - You can use the same configuration language to provision the Kubernetes infrastructure and to deploy applications into it.
- Drift detection - terraform plan will always show you the difference between the real state at a given time and the configuration you intend to apply.
- Full lifecycle management - Terraform doesn’t just create resources initially; it offers a single command to create, update, and delete tracked resources without needing to inspect the API to identify them.
- Synchronous feedback - While asynchronous behavior is often useful, sometimes it’s counter-productive, as the job of identifying operation results (failures or details of created resources) is left to the user. For example, you don’t have the IP/hostname of the load balancer until it has finished provisioning, so you can’t create a DNS record pointing to it.
- Graph of relationships - Terraform understands relationships between resources which may help in scheduling - e.g. Terraform won’t try to create a service in a Kubernetes cluster until the cluster exists.
Understand the Code
- Review the contents of the main.tf file:
- Variables are defined for region, zone, and network_name. These will be used to create the Kubernetes cluster.
- The Google Cloud provider will let us create resources in this project.
- There are several resources defined to create the appropriate network and cluster.
- At the end, there are some outputs which you’ll see after running terraform apply.
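A minimal sketch of the shape of main.tf described above. The attribute values and resource names here are illustrative assumptions, not the lab's exact file (the lab does use tf-gke-k8s as the cluster name):

```hcl
# Illustrative sketch of main.tf; values are assumptions, not the lab's exact file.

# Variables used to create the Kubernetes cluster.
variable "region" {
  default = "us-central1"
}

variable "zone" {
  default = "us-central1-a"
}

variable "network_name" {
  default = "tf-gke-k8s"
}

# The Google Cloud provider lets us create resources in this project.
provider "google" {
  region = var.region
}

# Network and cluster resources.
resource "google_compute_network" "default" {
  name                    = var.network_name
  auto_create_subnetworks = false
}

resource "google_container_cluster" "default" {
  name               = var.network_name
  location           = var.zone
  initial_node_count = 3
  network            = google_compute_network.default.self_link
}

# Outputs shown after terraform apply.
output "cluster_name" {
  value = google_container_cluster.default.name
}

output "cluster_endpoint" {
  value = google_container_cluster.default.endpoint
}
```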
- Review the contents of the k8s.tf file:
- The script configures a Kubernetes provider with Terraform and creates the namespace, replication_controller, and service resources.
- The script returns an nginx service IP as an output.
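A sketch of what k8s.tf does, under the assumptions above (names like "staging" and the exact credential wiring are illustrative, not the lab's exact file): the Kubernetes provider is pointed at the cluster created in main.tf, then the namespace, replication controller, and service are created, and the service IP is exported as an output.

```hcl
# Illustrative sketch of k8s.tf; names and wiring are assumptions.

# Point the Kubernetes provider at the cluster created in main.tf.
data "google_client_config" "current" {}

provider "kubernetes" {
  host  = "https://${google_container_cluster.default.endpoint}"
  token = data.google_client_config.current.access_token
  cluster_ca_certificate = base64decode(
    google_container_cluster.default.master_auth[0].cluster_ca_certificate,
  )
}

resource "kubernetes_namespace" "staging" {
  metadata {
    name = "staging"
  }
}

# Replication controller keeping the nginx pods running.
resource "kubernetes_replication_controller" "nginx" {
  metadata {
    name      = "nginx"
    namespace = kubernetes_namespace.staging.metadata[0].name
  }
  spec {
    selector = {
      app = "nginx"
    }
    template {
      metadata {
        labels = {
          app = "nginx"
        }
      }
      spec {
        container {
          image = "nginx:latest"
          name  = "nginx"
        }
      }
    }
  }
}

# LoadBalancer service in front of the nginx pods.
resource "kubernetes_service" "nginx" {
  metadata {
    name      = "nginx"
    namespace = kubernetes_namespace.staging.metadata[0].name
  }
  spec {
    selector = {
      app = "nginx"
    }
    port {
      port        = 80
      target_port = 80
    }
    type = "LoadBalancer"
  }
}

# The nginx service IP returned as an output.
output "load-balancer-ip" {
  value = kubernetes_service.nginx.status[0].load_balancer[0].ingress[0].ip
}
```

Note that Terraform can only populate the load-balancer-ip output after the cloud load balancer finishes provisioning, which is the synchronous-feedback benefit described earlier.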
Initialize and install dependencies
The terraform init command initializes a working directory containing Terraform configuration files. It performs several initialization steps to prepare the directory for use, and it is always safe to run multiple times to bring the working directory up to date with changes in the configuration.
Run terraform init:
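Run this from the directory containing main.tf and k8s.tf:

```shell
# Download the provider plugins declared in the configuration
# (google, kubernetes) and prepare the local state backend.
terraform init
```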
Example output:
Run the terraform apply command, which is used to apply the changes required to reach the desired state of the configuration:
Review Terraform’s actions and inspect the resources that will be created. When ready, type yes to begin. On completion, you should see output similar to the following:
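The command itself, run from the same directory:

```shell
# Show the execution plan and, after confirmation, create the
# network, cluster, namespace, replication controller, and service.
terraform apply
```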
Example output:
Verify resources created by Terraform
- In the console, navigate to Navigation menu > Kubernetes Engine.
- Click on tf-gke-k8s cluster and check its configuration.
- In the left panel, click Services & Ingress and check the nginx service status.
- Click the Endpoints IP address to open the Welcome to nginx! page in a new browser tab.
Thanks for your time…
Guneycan Sanli