How to Deploy a Spring Boot Application with Postgres Database on GKE with Ingress GKE Controller: A Step-by-Step Guide

Note: All the source code for this tutorial is available on my GitHub repo.
In this tutorial, we will cover the following:
- Take a look at the Spring Boot app we are going to deploy
- Build/publish the app Docker image to Docker Hub
- Why GKE Ingress?
- How does GKE Ingress work?
- Network Endpoint Groups
- Discuss the Kubernetes YAML files
- DNS mapping
- Deploy to the GKE cluster
Let's dive right in.
Here is the architecture of our deployment:

1- Spring Boot Application:
Let's discuss the code of our Spring Boot application.
First, the content of our application.yml:
server:
  port: 8080
  servlet:
    context-path: /api

spring:
  application:
    name: spring-boot-with-k8s
  datasource:
    url: jdbc:postgresql://${DB_HOST}:5432/${DB_NAME}
    username: ${POSTGRES_USER}
    password: ${POSTGRES_PASSWORD}
  jpa:
    database-platform: org.hibernate.dialect.PostgreSQLDialect
    show-sql: 'true'
  flyway:
    validate-on-migrate: true
    encoding: UTF-8

# configure liveness and readiness probes
management:
  endpoint:
    health:
      probes:
        enabled: true
      show-details: always
  health:
    livenessState:
      enabled: true
    readinessState:
      enabled: true
We are only going to discuss the important parts of the file.
As we can see, the context path of the application is set to /api, so we can access the resources as http://host:8080/api/xxx.
The ${DB_HOST}, ${DB_NAME}, ${POSTGRES_USER} and ${POSTGRES_PASSWORD} placeholders are environment variables whose values we are going to pass to the container.
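To make the placeholder mechanism concrete, here is a small Python sketch of what the resolved JDBC URL looks like. The substitution itself is done by Spring at startup; this only illustrates the result, using the values our ConfigMap will provide later on.

```python
import os

# Values that Kubernetes will inject into the container (from the ConfigMap)
os.environ["DB_HOST"] = "postgres-service"
os.environ["DB_NAME"] = "greeting"

# Spring resolves ${DB_HOST} and ${DB_NAME} in application.yml at startup;
# the resulting JDBC URL looks like this:
url = f"jdbc:postgresql://{os.environ['DB_HOST']}:5432/{os.environ['DB_NAME']}"
print(url)  # jdbc:postgresql://postgres-service:5432/greeting
```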
flyway:
  validate-on-migrate: true
We are using Flyway, an open-source database migration tool that helps developers implement automated, version-based database migrations.
With validate-on-migrate: true, Flyway validates the already-applied migrations before running new ones; at application startup, it applies all the SQL files present under the path resources/db/migration/** to the Postgres database.
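As a side note, Flyway identifies versioned migrations by their file name (V<version>__<description>.sql) and applies them in version order. The sketch below illustrates that ordering in Python; the migration file names are hypothetical, not from this project.

```python
import re

def flyway_version(filename):
    """Extract the numeric version from a Flyway versioned migration name."""
    match = re.match(r"V(\d+(?:_\d+)*)__.+\.sql$", filename)
    if not match:
        raise ValueError(f"not a versioned migration: {filename}")
    # Versions like "1_2" compare component by component
    return tuple(int(part) for part in match.group(1).split("_"))

# Hypothetical migration files, deliberately out of order
migrations = ["V2__add_index.sql", "V1__init.sql", "V1_1__seed_data.sql"]
ordered = sorted(migrations, key=flyway_version)
print(ordered)  # ['V1__init.sql', 'V1_1__seed_data.sql', 'V2__add_index.sql']
```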
# configure liveness and readiness probes
management:
  endpoint:
    health:
      probes:
        enabled: true
      show-details: always
  health:
    livenessState:
      enabled: true
    readinessState:
      enabled: true
Finally, we expose the health probe endpoints, a feature of Spring Boot Actuator that allows monitoring the health of the application. Specifically, the liveness and readiness probes determine whether the application is still alive and functioning properly, and whether it is ready to receive traffic.
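When the probes are enabled, the Actuator endpoints return a small JSON status document, typically {"status": "UP"} when healthy. A minimal sketch of how a consumer might interpret that body (Kubernetes itself only looks at the HTTP status code, so this is purely illustrative):

```python
import json

# Typical body returned by /api/actuator/health/liveness when healthy
body = '{"status": "UP"}'

def is_healthy(response_body):
    """Interpret an Actuator health body; "UP" mirrors an HTTP 200 response."""
    return json.loads(response_body).get("status") == "UP"

print(is_healthy(body))  # True
```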
CREATE TABLE "greeting"
(
    "id" serial,
    "message" varchar not null,
    PRIMARY KEY ("id")
);

INSERT INTO "greeting" ("message") VALUES ('Hello World!');
This is the SQL file we are going to execute with Flyway.
@RequestMapping("/greeting")
public String greeting() {
    return greetingService.getGreeting();
}
This is the API we are exposing, that will fetch the greeting message from the "greeting" database table.
2- Building/Publishing Docker image to Docker Hub
FROM --platform=linux/amd64 maven:3.8.3-openjdk-17

# Copy the jar file into the image
COPY target/greeting-*.jar /app/
WORKDIR /app
ENV ARG="--server.port=8080"

# Rename the jar to a fixed name so CMD does not depend on the version
RUN mv /app/greeting-*.jar /app/main.jar
CMD java -jar main.jar $ARG
This is the Dockerfile we are using to package the application as a Docker image.
First, we need to package the Spring Boot application as a jar, using the following command:
mvn -DskipTests=true clean install
Then we can build the Docker image:
docker build -t soufianeodf/greeting:v1 .
You can replace "soufianeodf" with your Docker Hub username.
and then push it to the Docker Hub registry. Log in to your Docker Hub account first if you are not already:
docker login
then push the image:
docker push soufianeodf/greeting:v1
Now our Docker image is publicly available for use.
Note: for simplicity's sake, we are using the Docker Hub registry and publishing the image publicly, but of course, you are free to use whatever registry you want, and to make the image public or private.
3- Why GKE Ingress?
When dealing with Kubernetes ingress traffic, creating a LoadBalancer for each service may not be the best approach. Instead, you can rely on a Kubernetes ingress controller to manage all the traffic for the cluster. By using either direct DNS or wildcard DNS mapping, you can effectively route traffic to backend Kubernetes services.
One benefit of using an ingress controller is the ability to attach multiple DNS names to a single Load Balancer and route them to different service backends. Additionally, you can use path-based routing rules in the ingress resources to manage traffic to various Kubernetes services.
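Conceptually, the routing an ingress performs is a lookup from (host, path prefix) to a backend service. Here is a toy Python sketch of that idea; the hosts and service names are hypothetical, and a real controller is far more involved:

```python
# Toy routing table: (host, path prefix) -> backend service (names hypothetical)
routes = {
    ("app.example.com", "/api"): "greeting-service",
    ("app.example.com", "/web"): "frontend-service",
}

def route(host, path):
    """Return the backend for the first matching (host, prefix) rule, else None."""
    for (rule_host, prefix), backend in routes.items():
        if host == rule_host and path.startswith(prefix):
            return backend
    return None

print(route("app.example.com", "/api/greeting"))  # greeting-service
```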
The great thing is, GKE has a built-in ingress controller, which makes setting up an ingress a breeze with no additional configuration required. However, if your project has specific requirements or needs different features, you can set up another ingress controller, such as the Nginx ingress controller.
This tutorial will focus on creating an ingress object using the GKE ingress controller.
4- How Does GKE Ingress Work?
As you are likely aware, a Kubernetes ingress requires an Ingress controller to function properly. In this case, GKE provides its ingress controller, known as the GKE ingress controller.
When you create an ingress object using the GKE ingress controller, a Load Balancer is launched (either public or private) with all the routing rules specified in the ingress resource. However, since the Load Balancer is external to the cluster, the backend services defined in the ingress resource must be of the NodePort type.
This is in contrast to a typical ingress controller implementation, such as the Nginx ingress controller, where the proxy layer sits inside the cluster and can communicate with services without NodePort.
5- Network Endpoint Groups
In GKE, there is an important concept known as Network Endpoint Groups (NEGs). Even when the backend service is of the NodePort type, GKE does not simply route traffic to any node within the cluster to reach the pods. Rather, using NEGs, all traffic is sent directly to the nodes where the pods are located.
Without the use of NEGs, traffic from the Load Balancer can be routed to any node within the cluster, resulting in additional network hops before finally reaching the node where the pod resides. This can lead to slower and less efficient routing, highlighting the importance of using NEGs within GKE.
6- Kubernetes YAML Files:
Now let's discuss our Kubernetes YAML files:
postgres-configmap.yml
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-config
data:
  host: postgres-service # database host
  name: greeting # database name
Those values supply the DB_HOST and DB_NAME environment variables that the application.yml of the Spring Boot app refers to.
postgres-secret.yml
apiVersion: v1
kind: Secret
metadata:
  name: postgres-secrets
type: Opaque
data:
  postgres_user: cG9zdGdyZXM= # postgres username encoded in base64 (the original value is postgres)
  postgres_password: cm9vdA== # postgres password encoded in base64 (the original value is root)
This is the terminal command we can use to encode the secret values in base64 (note that base64 is an encoding, not encryption):
echo -n 'your_secret_key' | base64
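If you prefer to do this from code, the same encoding can be reproduced, and checked against the values in postgres-secret.yml, with Python's standard library:

```python
import base64

# Encode the plaintext values used in this tutorial
user_encoded = base64.b64encode(b"postgres").decode()
password_encoded = base64.b64encode(b"root").decode()

print(user_encoded)      # cG9zdGdyZXM=
print(password_encoded)  # cm9vdA==

# Decoding recovers the original values
assert base64.b64decode(user_encoded) == b"postgres"
```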
postgres-volume.yml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-pvc
  labels:
    app: postgres
    tier: database
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
We are only going to use a PersistentVolumeClaim, without specifying any PersistentVolume; on GKE, Google creates one for us automatically through dynamic provisioning when none is specified.
As we can see in the Google documentation:
PersistentVolume resources can be provisioned dynamically through PersistentVolumeClaims, or they can be explicitly created by a cluster administrator.
GKE creates a default StorageClass for you which uses the balanced persistent disk type (ext4). The default StorageClass is used when a PersistentVolumeClaim doesn't specify a storageClassName.
java-deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: greeting-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: greeting
  template:
    metadata:
      labels:
        app: greeting
    spec:
      containers:
        - name: greeting
          image: soufianeodf/greeting:v1
          imagePullPolicy: Always
          readinessProbe:
            httpGet:
              path: /api/actuator/health/readiness
              port: 8080
            initialDelaySeconds: 180
            periodSeconds: 10
          livenessProbe:
            httpGet:
              path: /api/actuator/health/liveness
              port: 8080
            initialDelaySeconds: 120
            periodSeconds: 30
          resources:
            limits:
              cpu: 500m
              memory: 512Mi
            requests:
              cpu: 200m
              memory: 256Mi
          ports:
            - containerPort: 8080
          env: # Setting environment variables
            - name: DB_HOST # Database host address from the ConfigMap
              valueFrom:
                configMapKeyRef:
                  name: postgres-config # name of the ConfigMap
                  key: host
            - name: DB_NAME # Database name from the ConfigMap
              valueFrom:
                configMapKeyRef:
                  name: postgres-config
                  key: name
            - name: POSTGRES_USER # Database username from the Secret
              valueFrom:
                secretKeyRef:
                  name: postgres-secrets # Secret name
                  key: postgres_user
            - name: POSTGRES_PASSWORD # Database password from the Secret
              valueFrom:
                secretKeyRef:
                  name: postgres-secrets
                  key: postgres_password
postgres-deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres-deployment
  labels:
    app: postgres
    tier: database
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: postgres
        tier: database
    spec:
      containers:
        - name: postgres
          image: postgres:13
          imagePullPolicy: Always
          env:
            - name: POSTGRES_USER
              valueFrom:
                secretKeyRef:
                  name: postgres-secrets
                  key: postgres_user
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: postgres-secrets
                  key: postgres_password
            - name: POSTGRES_DB # Setting database name from the ConfigMap
              valueFrom:
                configMapKeyRef:
                  name: postgres-config
                  key: name
            - name: PGDATA
              value: "/var/lib/postgresql/data/pgdata"
          ports:
            - containerPort: 5432
              name: postgres
          volumeMounts:
            - name: postgres-storage
              mountPath: /var/lib/postgresql/data/pgdata
      volumes:
        - name: postgres-storage
          persistentVolumeClaim:
            claimName: postgres-pvc
java-service.yml
apiVersion: v1
kind: Service
metadata:
  name: greeting-service
spec:
  type: NodePort
  selector:
    app: greeting
  ports:
    - name: http
      port: 80
      targetPort: 8080
Note: For GKE ingress to work, the service type has to be NodePort. It is a requirement.
postgres-service.yml
apiVersion: v1
kind: Service
metadata:
  name: postgres-service
  labels:
    app: postgres
    tier: database
spec:
  selector:
    app: postgres
    tier: database
  ports:
    - name: postgres
      port: 5432
      targetPort: 5432
  type: ClusterIP
java-ingress.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
  annotations:
    kubernetes.io/ingress.class: "gce"
    kubernetes.io/ingress.global-static-ip-name: "ingress-webapps"
spec:
  rules:
    - host: "<your-custom-domain.com>"
      http:
        paths:
          - pathType: Prefix
            path: "/"
            backend:
              service:
                name: greeting-service
                port:
                  number: 80
There are a couple of things we need to discuss here:
1- The kubernetes.io/ingress.class annotation with the value gce tells GKE to create a public (external) Load Balancer. If you don't specify it, it defaults to a public one.
2- To use a custom domain name to access the application, the IP of the Load Balancer should be static. For that, we need to reserve a global static IP address and reference its name in the kubernetes.io/ingress.global-static-ip-name annotation of the Ingress YAML file. We can reserve it with the following command in the Google Cloud Shell:
gcloud compute addresses create ingress-webapps --global
After the creation, we can see the assigned static IP:

3- In our Ingress YAML file, the pathType is set to Prefix, which means the Ingress matches any URL path that starts with the specified path ("/" here) and routes all the traffic to the greeting-service backend.
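For intuition, the Ingress spec defines Prefix matching element-wise on the /-separated path, and with path: "/" every request path matches. Here is a small illustrative sketch of that rule (not the actual controller code):

```python
def prefix_match(ingress_path, request_path):
    """Element-wise prefix match, as the Ingress spec defines pathType: Prefix."""
    if ingress_path == "/":
        return True  # "/" matches every request path
    ingress_parts = ingress_path.strip("/").split("/")
    request_parts = request_path.strip("/").split("/")
    return request_parts[:len(ingress_parts)] == ingress_parts

print(prefix_match("/", "/api/greeting"))     # True
print(prefix_match("/api", "/api/greeting"))  # True
print(prefix_match("/api", "/apix"))          # False: elements must match whole
```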
7- DNS mapping
For this step, you should already have a domain name; if not, you can purchase one.
For this example, we will use the domain name example.com.
In your Google Cloud project, look for Cloud DNS, then click on CREATE ZONE. You should have something like the following:

After that, click on the created zone and add an A record pointing to the static IP created earlier.

8- Deploying to the GKE Cluster:
Now you can go ahead and create a GKE cluster:

I will be creating it using the autopilot mode.
Then you can go to the cloud shell, and clone the project:
git clone https://github.com/soufianeodf/deploy-spring-boot-with-k8s.git
cd deploy-spring-boot-with-k8s/k8s # go inside the k8s folder
kubectl apply --recursive -f ./ # apply all yml files recursively
You can keep watching the pods in watch mode until everything is ready:
kubectl get pods -w

Same thing for the ingress:
kubectl get ingress -w
The IP will take a couple of minutes to appear, and in the end you will get something like this:

You also need to check that the health check on the created Load Balancer is OK:

Finally, you can access the application through the domain name by typing the following URL into a browser:
http://your-domain/api/greeting

Congratulations!
Don't forget to delete the cluster at the end, along with the static IP you created.
Conclusion:
The article covers the deployment of a Spring Boot application using Docker and Kubernetes, specifically on the Google Kubernetes Engine (GKE). The process involves building and publishing a Docker image of the application to Docker Hub, using GKE Ingress for routing traffic to the application, Network Endpoint Groups (NEGs) for load balancing, and deploying the application to the GKE cluster using Kubernetes YAML files.
The article also explains how GKE Ingress works and how DNS mapping is used to map domain names to the deployed application. Overall, the article provides a comprehensive guide to deploying a Spring Boot application to a GKE cluster using Kubernetes and Docker.
I would be thrilled to hear your thoughts and opinions on the topic. So please don't hesitate to leave a comment below and share your insights with me.