I. Introduction

In this tutorial I will show you how to write a small Spring Boot CRUD application and how to deploy it on Kubernetes.

Spring Boot is an innovative project that makes it easy to create Spring applications by simplifying configuration and deployment through its convention-over-configuration approach.

Kubernetes (commonly referred to as “K8s”) is an open-source system for automating deployment, scaling and management of containerized applications that was originally designed by Google and now maintained by the Cloud Native Computing Foundation. It aims to provide a “platform for automating deployment, scaling, and operations of application containers across clusters of hosts”. It works with a range of container tools, including Docker.

Docker is an open source project that automates the deployment of applications inside software containers.

This tutorial is a getting-started guide to the Spring Boot & K8s stack.

II. Writing the application

1. Creating the skeleton

Here we go! We will start by using the great features offered by the Spring team at Pivotal Software, Inc.

To avoid the tedious parts of creating a new project and getting it started, the Spring team created the Spring Initializr project.

The Spring Initializr is a useful project that can easily generate a basic Spring Boot project structure. You can choose whether the project is based on Maven or Gradle, whether it uses Java, Kotlin or Groovy, and which version of Spring Boot you want.

Spring Initializr can be used:

  • Using a Web-based Interface http://start.spring.io
  • Using Spring Tool Suite or other different IDEs like NetBeans & IntelliJ
  • Using the Spring Boot CLI

Spring Initializr gives you the ability to add the core dependencies that you need, like JDBC drivers or Spring Boot Starters.

⚠️ Hey! What are Spring Boot Starters?

Spring Boot Starters are a set of convenient dependency descriptors that you can include in your application. You get a one-stop shop for all the Spring and related technology that you need, without having to hunt through sample code and copy-paste loads of dependency descriptors. For example, if you want to get started using Spring and JPA for database access, just include the spring-boot-starter-data-jpa dependency in your project, and you are good to go.

The starters contain a lot of the dependencies that you need to get a project up and running quickly and with a consistent, supported set of managed transitive dependencies.
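For example, pulling JPA support into a Maven project is a single dependency entry (a quick illustration; the version is managed by the Spring Boot parent, and the full pom.xml is shown later):

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-jpa</artifactId>
</dependency>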

ℹ️ Note

You can learn more about Spring Boot Starters here: https://docs.spring.io/spring-boot/docs/current/reference/htmlsingle/#using-boot-starter

For our case we will be using:

  • Maven
  • Java 8
  • Spring Boot 1.5.x

And for the Dependencies we will choose:

  • JPA
  • H2
  • Rest Repositories
  • Actuator
  • Lombok

Spring Initializr - Bootstrapping MySchool

The resulting pom.xml will look like this:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>com.onepoint.labs</groupId>
    <artifactId>myschool</artifactId>
    <version>0.0.1-SNAPSHOT</version>
    <packaging>jar</packaging>

    <name>MySchool</name>
    <description>Demo project for Spring Boot</description>

    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>1.5.10.RELEASE</version>
        <relativePath/> <!-- lookup parent from repository -->
    </parent>

    <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
        <java.version>1.8</java.version>
    </properties>

    <dependencies>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-actuator</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-data-jpa</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-data-rest</artifactId>
        </dependency>

        <dependency>
            <groupId>com.h2database</groupId>
            <artifactId>h2</artifactId>
            <scope>runtime</scope>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-test</artifactId>
            <scope>test</scope>
        </dependency>
    </dependencies>

    <build>
        <plugins>
            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
            </plugin>
        </plugins>
    </build>

</project>

Now, we need to add the embedded database configuration to the application.properties:

spring.datasource.driver-class-name=org.h2.Driver
spring.jpa.hibernate.ddl-auto=create
spring.datasource.url=jdbc:h2:mem:boutique;DB_CLOSE_DELAY=-1
spring.datasource.username=sa
spring.datasource.password=

Next, we will start the implementation of our Java components.

2. Presenting the domain

The domain of our application will contain only one entity: Student.

Domain: the Student Entity

3. Implementing the domain

The Student entity will look like this:

@Entity
@Data
public class Student {

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private Long id;
    private String name;
    private String family;

}
  • @Data generates all the boilerplate that is normally associated with simple POJOs (Plain Old Java Objects) and beans: getters for all fields, setters for all non-final fields, and appropriate toString, equals and hashCode implementations, as the short example below illustrates.
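
To illustrate, the following snippet compiles only because Lombok generates the accessors at build time (a quick sketch, not part of the project sources):

Student student = new Student();
student.setName("Nebrass");        // setter generated by @Data
student.setFamily("Lamouchi");
String label = student.toString(); // toString() generated by @Data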

Next, we will implement the JPA Repository:

@RepositoryRestResource(collectionResourceRel = "students", path = "students")
public interface StudentRepository extends PagingAndSortingRepository<Student, Long> {

    List<Student> findByName(String name);

    List<Student> findByNameAndFamily(String name, String family);
}
  • We use the @RepositoryRestResource annotation to customize the REST endpoint.
  • We extend the PagingAndSortingRepository interface to get the paging & sorting features.
  • We define custom repository methods using the Spring Data query-derivation DSL.

4. Adding the Swagger 2 Capabilities

Swagger 2 is an open source project used to describe and document RESTful APIs. Swagger 2 is language-agnostic and is extensible into new technologies and protocols beyond HTTP. The current version defines a set of HTML, JavaScript and CSS assets that dynamically generate documentation from a Swagger-compliant API. These files are bundled by the Swagger UI project to display the API in the browser. Besides rendering documentation, Swagger UI allows other API developers or consumers to interact with the API’s resources without having any of the implementation logic in place.

The Swagger 2 specification, now known as the OpenAPI Specification, has several implementations. We will be using the Springfox implementation in our project.

To enable Swagger in our project, we will be:

  • Adding the Maven Dependencies
  • Adding the Java Configuration

The Maven Dependencies:

<!--DEPENDENCIES FOR SWAGGER DOCUMENTATION-->
<dependency>
    <groupId>io.springfox</groupId>
    <artifactId>springfox-data-rest</artifactId>
    <version>2.8.0</version>
</dependency>

<dependency>
    <groupId>io.springfox</groupId>
    <artifactId>springfox-swagger-ui</artifactId>
    <version>2.8.0</version>
</dependency>

<dependency>
    <groupId>io.springfox</groupId>
    <artifactId>springfox-swagger2</artifactId>
    <version>2.8.0</version>
</dependency>

For the Java configuration, we need to create a Spring @Bean that configures Swagger in the application context.

@Configuration
public class SwaggerConfig {

    @Bean
    public Docket productApi() {
        return new Docket(DocumentationType.SWAGGER_2).select()
                .apis(RequestHandlerSelectors.any())
                .paths(PathSelectors.any())
                .build();
    }
}

Also, we need to add the @EnableSwagger2 annotation to the Spring Boot application's main class. We also have to add the @Import({springfox.documentation.spring.data.rest.configuration.SpringDataRestConfiguration.class}) annotation, which enables Springfox to scan and document the endpoints generated by Spring Data REST.
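
As a sketch, the annotated main class would look like this (the class name MySchoolApplication is just illustrative):

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Import;
import springfox.documentation.spring.data.rest.configuration.SpringDataRestConfiguration;
import springfox.documentation.swagger2.annotations.EnableSwagger2;

@SpringBootApplication
@EnableSwagger2
@Import(SpringDataRestConfiguration.class)
public class MySchoolApplication {

    public static void main(String[] args) {
        SpringApplication.run(MySchoolApplication.class, args);
    }
}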

5. Run it !

To run the Spring Boot Application, you just run: mvn spring-boot:run

The application will be running on port 8080. To access it, go to http://localhost:8080 and you will see this:

Landing Page: HATEOAS JSON listing the REST services

Now let's enjoy the result: we will use the Swagger forms to insert some records.

To access the Swagger UI, just go to http://localhost:8080/swagger-ui.html#/

Swagger UI

We will use the Student Entity menu → the POST operation called saveStudent → Try it out → in the form, enter the name and family values that you want to store. For example:

{
  "family": "Lamouchi",
  "name": "Nebrass"
}

The screen will look like this:

Swagger POST Form: New Student

Just hit Execute and the response body will be something like this:

{
  "name": "Nebrass",
  "family": "Lamouchi",
  "_links": {
    "self": {
      "href": "http://localhost:8080/students/1"
    },
    "student": {
      "href": "http://localhost:8080/students/1"
    }
  }
}

And the Response headers will look like this:

content-type: application/hal+json;charset=UTF-8
date: Sun, 04 Mar 2018 22:05:55 GMT
location: http://localhost:8080/students/1
transfer-encoding: chunked
x-application-context: application:local

We can use the findAllStudents menu of the Swagger UI to list all the students in the DB, to be sure that the record has been successfully created:

The Response body of the findAll operation will look like:

{
  "_embedded": {
    "students": [
      {
        "name": "Nebrass",
        "family": "Lamouchi",
        "_links": {
          "self": {
            "href": "http://localhost:8080/students/1"
          },
          "student": {
            "href": "http://localhost:8080/students/1"
          }
        }
      }
    ]
  },
  "_links": {
    "self": {
      "href": "http://localhost:8080/students{?page,size,sort}",
      "templated": true
    },
    "profile": {
      "href": "http://localhost:8080/profile/students"
    },
    "search": {
      "href": "http://localhost:8080/students/search"
    }
  },
  "page": {
    "size": 20,
    "totalElements": 1,
    "totalPages": 1,
    "number": 0
  }
}

💡 Tip

You can simply go to http://localhost:8080/students and you will get all the students persisted to the DB.
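
Or, equivalently, from the command line:

$ curl http://localhost:8080/students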

III. Moving to the Containers Era

To deploy our (so huge, so big) application 😂, we will be using Docker. We will deploy our code in a container so we can enjoy the great features provided by Docker.

Docker has become the standard to develop and run containerized applications.

This is great! Using Docker is quite simple, especially during development. Deploying containers on a single server (docker-machine) is simple, but when you start thinking about deploying many containers across many servers, things become complicated (managing the servers, managing container state, etc.).

Here come orchestration systems, which orchestrate computing, networking and storage infrastructure on behalf of user workloads and provide many features, such as:

  • Scheduling: matching containers to machines based on many factors like resource needs, affinity requirements…
  • Replication
  • Handling failures
  • Etc…​

For our tutorial, we will choose Kubernetes, the star of container orchestration.

1. What is Kubernetes ?

Kubernetes (aka K8s) is a project spun out of Google as an open source next-gen container scheduler, designed with the lessons learned from developing and managing Borg and Omega.

Kubernetes is designed to have loosely coupled components centered around deploying, maintaining and scaling applications. K8s abstracts the underlying infrastructure of the nodes and provides a uniform layer for the deployed applications.

1.1 Kubernetes Architecture

At a high level, a Kubernetes cluster is composed of two kinds of nodes:

  • Master Nodes: The main control plane for Kubernetes. They contain an API Server, a Scheduler, a Controller Manager (the K8s cluster manager) and a datastore called etcd that saves the cluster state.
  • Worker Nodes: A single host, physical or virtual machine, capable of running Pods. They are managed by the Master nodes.

Kubernetes Architecture Overview

Let’s have a look inside a Master node:

  • (Kube) API-Server: allows communication, through REST APIs, between the Master node and all its clients such as the Worker Nodes, kubectl, …
  • (Kube) Scheduler: a policy-rich, topology-aware, workload-specific component that assigns a Node to each newly created Pod; its decisions significantly impact availability, performance, and capacity.
  • (Kube) Controller Manager: a daemon that embeds the core control loops shipped with Kubernetes. A control loop is a permanent listener that regulates the state of the system. In Kubernetes, a controller is a control loop that watches the shared state of the cluster through the API-Server and makes changes attempting to move the current state towards the desired state.
  • Etcd: a strongly consistent, highly available key-value store used for persisting the cluster state.

Then, what about a Worker node?

  • Kubelet: an agent that runs on each node in the cluster. It makes sure that containers are running in a Pod.
  • Kube-Proxy: enables the Kubernetes Service abstraction by maintaining network rules on the host and performing connection forwarding.

💡 Tip

The container runtime that we will use is Docker. Kubernetes is compatible with many others like CRI-O, rkt, …

1.2 Kubernetes Core Concepts

The K8s ecosystem covers many concepts and components. We will try to introduce them briefly.

Kubectl

kubectl is a command-line interface for running commands against Kubernetes clusters.

Cluster

A collection of hosts that aggregate their resources (CPU, Ram, Disk, …​) into a usable pool.

Namespace

A logical partitioning capability that enables one Kubernetes cluster to be used by multiple users, teams of users, or a single user with multiple applications without concern for undesired interaction. Each user, team of users, or application may exist within its own Namespace, isolated from every other user of the cluster and operating as if it were the sole user of the cluster.
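
For example, a dedicated Namespace can be created with (the name here is just illustrative):

$ kubectl create namespace team-a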

List all Namespaces:

$ kubectl get namespace # or `kubectl get ns`

Label

Key-value pairs that are used to identify and select related sets of objects. Labels have a strict syntax and defined character set.

Annotation

Key-value pairs that contain non-identifying information or metadata. Annotations do not have the same syntax limitations as labels and can contain structured or unstructured data.

Selector

Selectors use labels to filter or select objects. Both equality-based (=, ==, !=) and set-based (in, notin, exists) selectors are supported.
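
For example, assuming Pods labeled app=nginx and environment=production exist, they can be selected with:

$ kubectl get pods -l app=nginx
$ kubectl get pods -l 'environment in (production, qa)'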

Use case of Annotations, Labels and Selectors.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  annotations:
    description: "nginx frontend"
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80

Pod

The Pod is the basic unit of work in Kubernetes. It represents a collection of containers that share resources, such as IP address and storage.

Pod Example.

apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
spec:
  containers:
  - name: myapp-container
    image: busybox
    command: ['sh', '-c', 'echo Hello Kubernetes! && sleep 3600']

To list all Pods:

$ kubectl get pod # or `kubectl get po`

ReplicationController

A framework for defining pods that are meant to be horizontally scaled. A replication controller includes a pod definition that is to be replicated, and the pods created from it can be scheduled to different nodes.

ReplicationController Example.

apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    app: nginx
  template:
    metadata:
      name: nginx
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80

List all ReplicationControllers:

$ kubectl get replicationcontroller # or `kubectl get rc`

ReplicaSet

An upgraded version of ReplicationController that supports set-based selectors.

ReplicaSet Example.

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      tier: frontend
    matchExpressions:
      - {key: tier, operator: In, values: [frontend]}
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google_samples/gb-frontend:v3
        env:
        - name: GET_HOSTS_FROM
          value: dns
        ports:
        - containerPort: 80

List all ReplicaSets:

$ kubectl get replicaset # or `kubectl get rs`

Deployment

Includes a Pod template and a replicas field. Kubernetes will make sure the actual state (number of replicas, Pod template) always matches the desired state. When you update a Deployment, it performs a “rolling update”.
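
For example, changing the image of a Deployment triggers a rolling update, which can be followed with kubectl rollout status (the Deployment and container names below match the example that follows):

$ kubectl set image deployment/nginx-deployment nginx=nginx:1.9.1
$ kubectl rollout status deployment/nginx-deployment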

Deployment Example.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80

List all Deployments:

$ kubectl get deployment

StatefulSet

A controller that aims to manage Pods that must persist or maintain state. Pod identity, including hostname, network, and storage, will be persisted.

StatefulSet Example.

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: nginx
  serviceName: "nginx"
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: nginx
        image: k8s.gcr.io/nginx-slim:0.8
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "my-storage-class"
      resources:
        requests:
          storage: 1Gi

List all StatefulSets:

$ kubectl get statefulset

DaemonSet

Ensures that an instance of a specific pod is running on all (or a selection of) nodes in a cluster.

DaemonSet Example.

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd-elasticsearch
  namespace: kube-system
  labels:
    k8s-app: fluentd-logging
spec:
  selector:
    matchLabels:
      name: fluentd-elasticsearch
  template:
    metadata:
      labels:
        name: fluentd-elasticsearch
    spec:
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers:
      - name: fluentd-elasticsearch
        image: gcr.io/google-containers/fluentd-elasticsearch:1.20
      terminationGracePeriodSeconds: 30

List all DaemonSets:

$ kubectl get daemonset # or `kubectl get ds`

Service

Defines a single IP/port combination that provides access to a pool of pods. It uses label selectors to map groups of pods and ports to a cluster-unique virtual IP.

Service Example.

apiVersion: v1
kind: Service
metadata:
  name: my-nginx
  labels:
    run: my-nginx
spec:
  ports:
  - port: 80
    protocol: TCP
  selector:
    run: my-nginx

List all Services:

$ kubectl get service # or `kubectl get svc`

Ingress

An Ingress is the primary method of exposing a cluster service (usually HTTP) to the outside world. Ingress controllers are load balancers or routers that usually offer SSL termination, name-based virtual hosting, etc…

Ingress Example.

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /testpath
        backend:
          serviceName: test
          servicePort: 80

List all Ingress:

$ kubectl get ingress

Volume

Storage that is tied to the Pod Lifecycle, consumable by one or more containers within the pod.
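
A minimal sketch of a Pod using an emptyDir Volume (all names here are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  containers:
  - name: app
    image: busybox
    command: ['sh', '-c', 'sleep 3600']
    volumeMounts:
    - name: scratch
      mountPath: /scratch
  volumes:
  - name: scratch
    emptyDir: {}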

PersistentVolume

A PersistentVolume (PV) represents a storage resource. PVs are commonly linked to a backing storage resource, NFS, GCEPersistentDisk, RBD etc. and are provisioned ahead of time. Their lifecycle is handled independently from a pod.

List all PersistentVolumes:

$ kubectl get persistentvolume # or `kubectl get pv`

PersistentVolumeClaim

A PersistentVolumeClaim (PVC) is a request for storage that satisfies a set of requirements. Commonly used with dynamically provisioned storage.
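
A minimal PVC sketch requesting 1Gi of storage (the claim name is illustrative):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-claim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi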

List all PersistentVolumeClaims:

$ kubectl get persistentvolumeclaim # or `kubectl get pvc`

StorageClass

Storage classes are an abstraction on top of an external storage resource. These will include a provisioner, provisioner configuration parameters as well as a PV reclaimPolicy.

List all StorageClasses:

$ kubectl get storageclass # or `kubectl get sc`

Job

The Job controller ensures that one or more pods are executed and successfully terminate. It will do this until it satisfies the completion and/or parallelism conditions.

List all Jobs:

$ kubectl get job

CronJob

An extension of the Job Controller, it provides a method of executing jobs on a cron-like schedule.
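
A minimal CronJob sketch that prints a message every five minutes (the name and image are illustrative; at the time of writing the API version is batch/v1beta1):

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/5 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            command: ['sh', '-c', 'date; echo Hello from Kubernetes']
          restartPolicy: OnFailure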

List all CronJobs:

$ kubectl get cronjob

ConfigMap

Externalized data stored within Kubernetes that can be referenced as a command-line argument or environment variable, or injected as a file via a volume mount. Ideal for implementing the External Configuration Store pattern.

List all ConfigMaps:

$ kubectl get configmap # or `kubectl get cm`

Secret

Functionally identical to ConfigMaps, but stored encoded as base64, and encrypted at rest (if configured).

List all Secrets:

$ kubectl get secret

2. Run Kubernetes locally

For our tutorial we will not build a real Kubernetes Cluster. We will use Minikube.

Minikube is a tool that makes it easy to run Kubernetes locally. Minikube runs a single-node Kubernetes cluster inside a VM on your laptop for users looking to try out Kubernetes or develop with it day-to-day.

For Minikube installation : https://github.com/kubernetes/minikube

After the installation, to start Minikube:

$ minikube start

The minikube start command creates a kubectl context called minikube. This context contains the configuration to communicate with your minikube cluster.

Minikube sets this context as the default automatically, but if you need to switch back to it in the future, run:

$ kubectl config use-context minikube

To access the Kubernetes Dashboard:

$ minikube dashboard

The Dashboard will be opened in your default browser:

Kubernetes Dashboard (Web UI)

The minikube stop command can be used to stop your cluster. This command shuts down the minikube virtual machine but preserves all cluster state and data. Starting the cluster again will restore it to its previous state.

The minikube delete command can be used to delete your cluster. This command shuts down and deletes the minikube virtual machine. No data or state is preserved.

3. Refactoring the application

Now we want to move from the H2 database to PostgreSQL. So we have to configure the application to use it, by setting properties such as the JDBC driver, URL, username and password in the application.properties file, and by adding the PostgreSQL JDBC driver to the pom.xml.

First of all, we start by adding this dependency to our pom.xml:

<dependency>
    <groupId>org.postgresql</groupId>
    <artifactId>postgresql</artifactId>
    <scope>runtime</scope>
</dependency>

Next, we do some modifications to the application.properties:

spring.datasource.driver-class-name=org.postgresql.Driver
spring.jpa.hibernate.ddl-auto=create
spring.datasource.url=jdbc:postgresql://${POSTGRES_SERVICE}:5432/${POSTGRES_DB_NAME}
spring.datasource.username=${POSTGRES_DB_USER}
spring.datasource.password=${POSTGRES_DB_PASSWORD}

We have used environment properties placeholders:

  • POSTGRES_SERVICE : Host of PostgreSQL DB Server
  • POSTGRES_DB_NAME : PostgreSQL DB Name
  • POSTGRES_DB_USER : PostgreSQL Username
  • POSTGRES_DB_PASSWORD : PostgreSQL Password

We will extract these values from Kubernetes ConfigMap and Secret objects.

Create the ConfigMap

We need to create the ConfigMap:

$ kubectl create configmap postgres-config \
	--from-literal=postgres.service.name=postgresql \
	--from-literal=postgres.db.name=boutique

We can check the created ConfigMap:

$ kubectl get cm postgres-config -o json

The output will look like this:

{
    "apiVersion": "v1",
    "data": {
        "postgres.db.name": "boutique",
        "postgres.service.name": "postgresql"
    },
    "kind": "ConfigMap",
    "metadata": {
        "creationTimestamp": "2018-03-25T16:42:39Z",
        "name": "postgres-config",
        "namespace": "default",
        "resourceVersion": "195",
        "selfLink": "/api/v1/namespaces/default/configmaps/postgres-config",
        "uid": "87d7481c-304b-11e8-889d-080027a8a37c"
    }
}

Create the Secret

Next we create the Secret:

$ kubectl create secret generic db-security \
	--from-literal=db.user.name=nebrass \
	--from-literal=db.user.password=password

We can check the created Secret:

$ kubectl get secret db-security -o json

The output will look like this:

{
    "apiVersion": "v1",
    "data": {
        "db.user.name": "bmVicmFzcw==",
        "db.user.password": "cGFzc3dvcmQ="
    },
    "kind": "Secret",
    "metadata": {
        "creationTimestamp": "2018-03-25T16:56:36Z",
        "name": "db-security",
        "namespace": "default",
        "resourceVersion": "714",
        "selfLink": "/api/v1/namespaces/default/secrets/db-security",
        "uid": "7ac96df3-304d-11e8-889d-080027a8a37c"
    },
    "type": "Opaque"
}
  • The credentials are encoded as base64. This only protects the secret from being read accidentally over someone's shoulder or from ending up readable in a terminal log; base64 is an encoding, not encryption (see the quick check below).
  • From kubernetes’s point of view the contents of this Secret is unstructured: it can contain arbitrary key-value pairs.
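
A quick check that base64 is only encoding, not encryption: decoding the stored value locally gives back the original username:

$ echo 'bmVicmFzcw==' | base64 --decode
nebrass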

Deploy PostgreSQL to Kubernetes

As the configuration is centralized and stored in the Kubernetes cluster, we can share it between the Spring Boot application and the PostgreSQL service that we will create now.

I have already prepared the PostgreSQL resource file in src/main/assets/. This YAML file contains a Deployment and a Service resource.

The properties are loaded from our ConfigMap and Secret.

The content of postgres.yml:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: postgresql
  namespace: default
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: postgresql
    spec:
      volumes:
      - name: data
        emptyDir: {}
      containers:
      - name: postgres
        image: postgres:9.6.5
        env:
        - name: POSTGRES_USER
          valueFrom:
            secretKeyRef:
              name: db-security
              key: db.user.name
        - name: POSTGRES_PASSWORD
          valueFrom:
            secretKeyRef:
              name: db-security
              key: db.user.password
        - name: POSTGRES_DB
          valueFrom:
            configMapKeyRef:
              name: postgres-config
              key: postgres.db.name
        ports:
        - containerPort: 5432
        volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/
---
apiVersion: v1
kind: Service
metadata:
  name: postgresql
  namespace: default
spec:
  selector:
    app: postgresql
  ports:
  - port: 5432
  • We will use the postgres:9.6.5 image
  • The env block is used to load data in the container environment.
  • Create an environment variable with a value loaded from a key called db.user.name in the secret called db-security.
  • Create an environment variable with a value loaded from a key called db.user.password in the secret called db-security.
  • Create an environment variable with a value loaded from a key called postgres.db.name in the configMap called postgres-config.

To apply this resource file to Kubernetes, we can do:

$ kubectl create -f src/main/assets/postgres.yml

The output will be:

deployment "postgresql" created
service "postgresql" created

We can check the created Deployment:

$ kubectl get deployment postgresql -o json

The output will look like this:

{
    "apiVersion": "extensions/v1beta1",
    "kind": "Deployment",
    "metadata": {
        "annotations": {
            "deployment.kubernetes.io/revision": "1"
        },
        "creationTimestamp": "2018-03-25T17:04:03Z",
        "generation": 1,
        "labels": {
            "app": "postgresql"
        },
        "name": "postgresql",
        "namespace": "default",
        "resourceVersion": "1025",
        "selfLink": "/apis/extensions/v1beta1/namespaces/default/deployments/postgresql",
        "uid": "84fa2f1a-304e-11e8-889d-080027a8a37c"
    },
    "spec": {
        ...
    },
    "status": {
        "availableReplicas": 1,
        "conditions": [
            {
                "lastTransitionTime": "2018-03-25T17:04:03Z",
                "lastUpdateTime": "2018-03-25T17:04:03Z",
                "message": "Deployment has minimum availability.",
                "reason": "MinimumReplicasAvailable",
                "status": "True",
                "type": "Available"
            }
        ],
        "observedGeneration": 1,
        "readyReplicas": 1,
        "replicas": 1,
        "updatedReplicas": 1
    }
}

We can check the created Service:

$ kubectl get service postgresql -o json


In the Service definition, two fields are particularly interesting: the port and the target port:

  • The port this service will be available on
  • The container port the service will forward to

We already mentioned the port in the spring.datasource.url property.

- What if we used the powerful features of Kubernetes to resolve this port dynamically?

- Ok but how? :)

After creating these resources, the effective properties will look like this:

spring.datasource.driver-class-name=org.postgresql.Driver
spring.jpa.hibernate.ddl-auto=create
spring.datasource.url=jdbc:postgresql://postgresql:5432/boutique
spring.datasource.username=nebrass
spring.datasource.password=password

The Datasource URL is pointing to a host called postgresql. The resolution of the hostname to IP is done by Kubernetes.

If we check the postgresql service:

$ kubectl get svc postgresql

NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
postgresql   ClusterIP   10.111.244.143   <none>        5432/TCP   1h

There is another cool feature in Kubernetes: we can fetch data related to the service itself, for example the host and port associated with this service.

For example:

  • ${postgresql.service.host} will be resolved to 10.111.244.143.
  • ${postgresql.service.port} will be resolved to 5432.

We can do even better ^^) we can nest the environment variables inside the placeholders that will be resolved by Kubernetes. They become:

  • ${postgresql.service.host} can be written ${${POSTGRES_SERVICE}.service.host}
  • ${postgresql.service.port} can be written ${${POSTGRES_SERVICE}.service.port}

In this way, the inner placeholder is resolved from the environment variable provided by the ConfigMap, and the outer placeholder is resolved by Kubernetes.

The resulting application.properties will look like:

spring.datasource.driver-class-name=org.postgresql.Driver
spring.jpa.hibernate.ddl-auto=create
spring.datasource.url=jdbc:postgresql://${${POSTGRES_SERVICE}.service.host}:${${POSTGRES_SERVICE}.service.port}/${POSTGRES_DB_NAME}
spring.datasource.username=${POSTGRES_DB_USER}
spring.datasource.password=${POSTGRES_DB_PASSWORD}

Now that we are ConfigMap addicts :) we will host our application.properties in a Kubernetes ConfigMap. To do so, just run:

$ kubectl create configmap app-config \
	--from-file=src/main/resources/application.properties

Now that the application.properties is stored in a ConfigMap, how can our Spring Boot application use it?

The answer is so easy: the Spring Cloud Kubernetes plugin.

What is Spring Cloud Kubernetes

The Spring Cloud Kubernetes plug-in implements the integration between Kubernetes and Spring Boot. It provides access to the configuration data of a ConfigMap using the Kubernetes API.

It makes it easy to integrate Kubernetes ConfigMaps directly with the Spring Boot externalized configuration mechanism, so that ConfigMaps behave as an alternative property source for Spring Boot configuration.

To enable the great features of the plugin:

  1. Add the Maven dependency: add this dependency to the pom.xml:

    
    <dependency>
        <groupId>io.fabric8</groupId>
        <artifactId>spring-cloud-starter-kubernetes</artifactId>
        <version>0.1.6</version>
    </dependency>
    
  2. Create the bootstrap file: create a new file bootstrap.properties under src/main/resources:

    
    spring.application.name=${project.artifactId}
    spring.cloud.kubernetes.config.name=app-config
    
    • The ${project.artifactId} placeholder will be parsed and populated by the maven-resources-plugin, which you will find in the pom.xml of the sample project hosted in the GitHub repository of this tutorial.
    • The name of the ConfigMap where we stored our great application.properties.

That’s it! Spring Cloud Kubernetes is now correctly integrated into our application. When we deploy our application to Kubernetes, it will use the application.properties stored in the ConfigMap app-config.

You say deploy? Ok but how to do it?

4. Deploy it to Kubernetes

The deployment?! That is a full story of its own, with many chapters, but we will try to keep it short and simple.

By definition, Kubernetes is a container orchestration solution. So deploying an application to Kubernetes means:

  • Containerizing the application: creating an image embedding the application.
  • Preparing the deployment resources (Deployment, ReplicaSet, etc…​).
  • Deploying the container to Kubernetes.

These steps take time; even if we try to automate the process, it would take a long time to implement, and even more time to cover all the cases and variants of our apps.

As these tasks are so heavy, we need a tool that does all of this for us.

Here comes the super powerful tool: Fabric8-Maven-Plugin.

Fabric8 Logo

Fabric8-Maven-Plugin is a one-stop shop for building and deploying Java applications to Docker, Kubernetes and OpenShift. It brings your Java applications onto Kubernetes and OpenShift, provides a tight integration into Maven and benefits from the build configuration already provided. It focuses on three tasks:

  • Building Docker images
  • Creating OpenShift and Kubernetes resources
  • Deploying applications on Kubernetes and OpenShift

The plugin will do all the heavy lifting! Yes, it will! :)

It can be configured very flexibly and supports multiple configuration models for creating:

  • Zero Configuration for a quick ramp-up where opinionated defaults will be pre-selected.
  • Inline Configuration within the plugin configuration in an XML syntax.
  • External Configuration templates of the real deployment descriptors which are enriched by the plugin.
  • Docker Compose Configuration to provide a Docker Compose file and bring up the Docker Compose deployment on a Kubernetes/OpenShift cluster.

To enable fabric8-maven-plugin in your project, just add this to the plugins section of your pom.xml:

<plugin>
  <groupId>io.fabric8</groupId>
  <artifactId>fabric8-maven-plugin</artifactId>
  <version>3.5.38</version>
</plugin>

Now in order to use fabric8-maven-plugin to build or deploy, make sure you have a Kubernetes cluster up and running.

The fabric8-maven-plugin supports a rich set of goals for providing a smooth Java developer experience. You can categorize these goals as follows:

  • Build goals are used to create and manage the Kubernetes build artifacts like Docker images.
    • fabric8:build : Build Docker images
    • fabric8:resource : Create Kubernetes resource descriptors
    • fabric8:push : Push Docker images to a registry
    • fabric8:apply : Apply resources to a running cluster
  • Development goals are used in deploying resource descriptors to the development cluster.
    • fabric8:run : Run a complete development workflow cycle (fabric8:resource → fabric8:build → fabric8:apply) in the foreground.
    • fabric8:deploy : Deploy resource descriptors to a cluster after creating them and building the app. Same as fabric8:run except that it runs in the background.
    • fabric8:undeploy : Undeploy and remove resource descriptors from a cluster.
    • fabric8:watch : Watch for doing rebuilds and restarts

If you want to integrate the goals in the maven lifecycle phases, you can do it easily:

<plugin>
  <groupId>io.fabric8</groupId>
  <artifactId>fabric8-maven-plugin</artifactId>
  <version>3.5.38</version>

  <!-- This block will connect fabric8:resource and fabric8:build to lifecycle phases -->
  <executions>
    <execution>
       <id>fmp</id>
       <goals>
         <goal>resource</goal>
         <goal>build</goal>
       </goals>
    </execution>
  </executions>
</plugin>

ℹ️ Note

For laziness purposes :p I will refer to the fabric8-maven-plugin as f8mp.

⚠️ Warning

f8mp needs access to Minikube's Docker environment; to set this up, just run the command eval $(minikube docker-env). Without it, Kubernetes will not find the Docker images built by f8mp.

Now when we run mvn clean install, for example, the plugin will build the Docker images and generate the Kubernetes resource descriptors in the ${basedir}/target/classes/META-INF/fabric8/kubernetes directory.

Let’s check the generated resource descriptors.

⚠️ Warning

Wait! Wait! We said that we would pass the ConfigMaps to the Spring Boot application. Where is that?!

Yep! Before generating our resource descriptors, we have to tell f8mp about this.

f8mp has an easy way to do this: the plugin can handle Resource Fragments. A Resource Fragment is a piece of YAML code located in the src/main/fabric8 directory. Each resource gets its own file, which contains a skeleton of a resource description. The plugin picks up the resource, enriches it and then combines all the data. Within these descriptor files you can freely use any Kubernetes feature.

In our case, the Resource Fragment will deliver the environment variable configuration to the Pod where our Spring Boot application will run. We will use a fragment of a Deployment, which will look like this:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: ${project.artifactId}
  namespace: default
spec:
  template:
    spec:
      containers:
        - name: ${project.artifactId}
          env:
          - name: POSTGRES_SERVICE
            valueFrom:
              configMapKeyRef:
                name: postgres-config
                key: postgres.service.name
          - name: POSTGRES_DB_NAME
            valueFrom:
              configMapKeyRef:
                name: postgres-config
                key: postgres.db.name
          - name: POSTGRES_DB_USER
            valueFrom:
              secretKeyRef:
                name: db-security
                key: db.user.name
          - name: POSTGRES_DB_PASSWORD
            valueFrom:
              secretKeyRef:
                name: db-security
                key: db.user.password
  • The name of our Deployment and the container.
  • The environment variables that we are creating and populating from the ConfigMap and Secret.

Now, when f8mp generates the resource descriptors, it will find this resource fragment and combine it with the other data. The resulting output will be coherent with the fragment that we already provided.

Let’s try it. Just run mvn clean install:

[INFO] Scanning for projects...
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] Building MySchool 0.0.1-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO]
...
[INFO]
[INFO] --- fabric8-maven-plugin:3.5.38:resource (fmp) @ myschool -
[INFO] F8: Running in Kubernetes mode
[INFO] F8: Running generator spring-boot
[INFO] F8: spring-boot: Using Docker image fabric8/java-jboss-openjdk8-jdk:1.3 as base / builder
[INFO] F8: using resource templates from /Users/n.lamouchi/MySchool/src/main/fabric8
[INFO] F8: fmp-service: Adding a default service 'myschool' with ports [8080]
[INFO] F8: spring-boot-health-check: Adding readiness probe on port 8080, path='/health', scheme='HTTP', with initial delay 10 seconds
[INFO] F8: spring-boot-health-check: Adding liveness probe on port 8080, path='/health', scheme='HTTP', with initial delay 180 seconds
[INFO] F8: fmp-revision-history: Adding revision history limit to 2
[INFO] F8: f8-icon: Adding icon for deployment
[INFO] F8: f8-icon: Adding icon for service
[INFO] F8: validating /Users/n.lamouchi/MySchool/target/classes/META-INF/fabric8/openshift/myschool-svc.yml resource
[INFO] F8: validating /Users/n.lamouchi/MySchool/target/classes/META-INF/fabric8/openshift/myschool-deploymentconfig.yml resource
[INFO] F8: validating /Users/n.lamouchi/MySchool/target/classes/META-INF/fabric8/openshift/myschool-route.yml resource
[INFO] F8: validating /Users/n.lamouchi/MySchool/target/classes/META-INF/fabric8/kubernetes/myschool-deployment.yml resource
[INFO] F8: validating /Users/n.lamouchi/MySchool/target/classes/META-INF/fabric8/kubernetes/myschool-svc.yml resource
[INFO]
...
[INFO]
[INFO] --- fabric8-maven-plugin:3.5.38:build (fmp) @ myschool -
[INFO] F8: Building Docker image in Kubernetes mode
[INFO] F8: Running generator spring-boot
[INFO] F8: spring-boot: Using Docker image fabric8/java-jboss-openjdk8-jdk:1.3 as base / builder
[INFO] Copying files to /Users/n.lamouchi/MySchool/target/docker/nebrass/myschool/snapshot-180327-174802-0575/build/maven
[INFO] Building tar: /Users/n.lamouchi/MySchool/target/docker/nebrass/myschool/snapshot-180327-174802-0575/tmp/docker-build.tar
[INFO] F8: [nebrass/myschool:snapshot-180327-174802-0575] "spring-boot": Created docker-build.tar in 283 milliseconds
[INFO] F8: [nebrass/myschool:snapshot-180327-174802-0575] "spring-boot": Built image sha256:61171
[INFO] F8: [nebrass/myschool:snapshot-180327-174802-0575] "spring-boot": Tag with latest
[INFO]
...
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 30.505 s
[INFO] Finished at: 2018-03-27T17:48:29+02:00
[INFO] Final Memory: 70M/721M
[INFO] ------------------------------------------------------------------------
  • Generating the resource descriptors based on the detected configuration: a Spring Boot application using port 8080 with Actuator endpoints available.
  • Building the Docker image in Kubernetes mode (locally, unlike the OpenShift mode, which uses the OpenShift S2I mechanism for the build).

After building our project, we get these files in the ${basedir}/target/classes/META-INF/fabric8/kubernetes directory:

  • myschool-deployment.yml
  • myschool-svc.yml

Let’s check the Deployment:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    fabric8.io/git-commit: 0120b762d7e26994e8b01d7e85f8941e5d095130
    fabric8.io/git-branch: master
    fabric8.io/scm-tag: HEAD
    ...
  labels:
    app: myschool
    provider: fabric8
    version: 0.0.1-SNAPSHOT
    group: com.onepoint.labs
  name: myschool
  namespace: default
spec:
  replicas: 1
  revisionHistoryLimit: 2
  selector:
    matchLabels:
      app: myschool
      provider: fabric8
      group: com.onepoint.labs
  template:
    metadata:
      annotations:
        fabric8.io/git-commit: 0120b762d7e26994e8b01d7e85f8941e5d095130
        fabric8.io/git-branch: master
        fabric8.io/scm-tag: HEAD
        ...
      labels:
        app: myschool
        provider: fabric8
        version: 0.0.1-SNAPSHOT
        group: com.onepoint.labs
    spec:
      containers:
      - env:
        - name: POSTGRES_SERVICE
          valueFrom:
            configMapKeyRef:
              key: postgres.service.name
              name: postgres-config
        - name: POSTGRES_DB_NAME
          valueFrom:
            configMapKeyRef:
              key: postgres.db.name
              name: postgres-config
        - name: POSTGRES_DB_USER
          valueFrom:
            secretKeyRef:
              key: db.user.name
              name: db-security
        - name: POSTGRES_DB_PASSWORD
          valueFrom:
            secretKeyRef:
              key: db.user.password
              name: db-security
        - name: KUBERNETES_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        image: nebrass/myschool:snapshot-180327-003059-0437
        imagePullPolicy: IfNotPresent
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 180
        name: myschool
        ports:
        - containerPort: 8080
          name: http
          protocol: TCP
        - containerPort: 9779
          name: prometheus
          protocol: TCP
        - containerPort: 8778
          name: jolokia
          protocol: TCP
        readinessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 10
        securityContext:
          privileged: false
  • Generated annotations that hold much useful data, like the git-commit id or the git-branch
  • The labels section holds the Maven project groupId, artifactId and version information, plus a provider=fabric8 label to indicate that this data was generated by f8mp
  • The Docker image, generated and built by f8mp. The suffix snapshot-180327-003059-0437 is the default format used for the version tag.
  • A liveness probe checks if the container in which it is configured is still up.
  • A readiness probe determines if a container is ready to service requests.

💡 Tip

The liveness and readiness probes are generated because f8mp detected the Spring Boot Actuator library in the classpath.

At this point, we can deploy our application just by using the command mvn fabric8:apply; the output will look like this:

[INFO] --- fabric8-maven-plugin:3.5.38:apply (default-cli) @ myschool ---
[INFO] F8: Using Kubernetes at https://192.168.99.100:8443/ in namespace default with manifest
/Users/n.lamouchi/Downloads/MyBoutiqueReactive/target/classes/META-INF/fabric8/kubernetes.yml
[INFO] Using namespace: default
[INFO] Updating a Service from kubernetes.yml
[INFO] Updated Service: target/fabric8/applyJson/default/service-myschool.json
[INFO] Using namespace: default
[INFO] Creating a Deployment from kubernetes.yml namespace default name myschool
[INFO] Created Deployment: target/fabric8/applyJson/default/deployment-myschool.json
[INFO] F8: HINT: Use the command `kubectl get pods -w` to watch your pods start up
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 16.003 s
[INFO] Finished at: 2018-03-28T00:03:56+02:00
[INFO] Final Memory: 78M/756M
[INFO] ------------------------------------------------------------------------

We can check all the resources that exist on our cluster:

$ kubectl get all

This command will list all the resources in the default namespace. The output will look something like this:

NAME                DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deploy/myschool     1         1         1            1           6m
deploy/postgresql   1         1         1            1           7m

NAME                       DESIRED   CURRENT   READY     AGE
rs/myschool-5dd7cbff98     1         1         1         6m
rs/postgresql-5f57747985   1         1         1         7m

NAME                             READY     STATUS    RESTARTS   AGE
po/myschool-5dd7cbff98-w2wtl     1/1       Running   0          6m
po/postgresql-5f57747985-8n9h6   1/1       Running   0          7m

NAME             TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
svc/kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP    23m
svc/myschool     ClusterIP   10.106.72.231   <none>        8080/TCP   20m
svc/postgresql   ClusterIP   10.111.62.173   <none>        5432/TCP   21m

💡 Tip

We can list all these resources on the K8s Dashboard.

Wow! Yes, these resources were created during the steps we did before :) Good job!

5. It works ! Hakuna Matata !

It’s done ^^) we deployed the application and all its required resources; but how can we access the deployed application?

The application will be accessible through the Kubernetes Service object called myschool.

Let’s check the myschool service: type kubectl get svc myschool, and the output will be:

NAME       TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
myschool   ClusterIP   10.106.72.231   <none>        8080/TCP   1d

The type of our service is ClusterIP. What is a ClusterIP ?

ClusterIP is the default ServiceType. It exposes the service on a cluster-internal IP so it will be only reachable from within the cluster.

So we cannot use this ServiceType, because we need our service to be reachable from outside the cluster. Is there any other type of service?

Yes! There are three other types of services, other than ClusterIP:

  • NodePort: Exposes the service on each Node’s IP at a static port (the NodePort). A ClusterIP service, to which the NodePort service will route, is automatically created. You’ll be able to contact the NodePort service, from outside the cluster, by requesting <NodeIP>:<NodePort>.
  • LoadBalancer: Exposes the service externally using a cloud provider’s load balancer. NodePort and ClusterIP services, to which the external load balancer will route, are automatically created.
  • ExternalName: Maps the service to the contents of the externalName field (e.g. foo.bar.example.com), by returning a CNAME record with its value.

💡 Tip

In our case, we will be using the LoadBalancer service, which redirects traffic across all the nodes. Clients connect to the LoadBalancer service through the load balancer’s IP.

Ok :) The LoadBalancer will be our ServiceType. But how can we tell this to the f8mp ?

We have two solutions:

  • The Resource Fragments as we did before.
  • The Inline Configuration, which is the XML-based configuration of the f8mp plugin.

This time, let’s use the Inline Configuration to tell f8mp that we want a LoadBalancer service:

In the configuration section of the f8mp plugin, we will declare an enricher.

An enricher is a component used to create and customize Kubernetes and OpenShift resource objects. f8mp comes with a set of enrichers which are enabled by default. One of these enrichers is fmp-service, which is used to customize Services.

The f8mp configuration with the enricher will look like this:

<plugin>
    <groupId>io.fabric8</groupId>
    <artifactId>fabric8-maven-plugin</artifactId>
    <version>3.5.38</version>
    <configuration>
        <enricher>
            <config>
                <fmp-service>
                    <type>LoadBalancer</type>
                </fmp-service>
            </config>
        </enricher>
    </configuration>
    <executions>
        <execution>
            <id>fmp</id>
            <goals>
                <goal>resource</goal>
                <goal>build</goal>
            </goals>
        </execution>
    </executions>
</plugin>

Let’s build and redeploy our project using mvn clean install fabric8:apply and check the type of the deployed service using kubectl get svc myschool:

NAME         TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
myschool     LoadBalancer   10.106.72.231   <pending>     8080:31246/TCP   2d

⚠️ Warning

The <pending> shown in the EXTERNAL-IP column is because we are using Minikube, which does not provision an external load balancer.

Cool ! How can we access the application now? How can we get the URL of the deployed application?

Euuuuh! The answer is shorter than the question :D To get the URL of the deployed app on Minikube, just type:

$ open $(minikube service myschool --url)

This command will open the URL of the Spring Boot Application in your default browser :)

Landing Page of the deployed application on Kubernetes

💡 Tip

The command minikube service myschool --url will give us the URL of the myschool service, which points to our Spring Boot application.

How can I access the Swagger UI of my deployed App?

$ open $(minikube service myschool --url)/swagger-ui.html

This command will open the URL of the Spring Boot Application in your default browser :)

Swagger UI of the deployed application on Kubernetes

The full source code of this tutorial can be found here.

IV. Conclusion & final words

The main goal of this tutorial is to introduce you to the Kubernetes ecosystem and to let you start playing with Kubernetes.

Getting your hands dirty with practical exercises on Kubernetes will let you master this great platform. You will appreciate the power of Kubernetes as you work through more use cases.

You can consider the small application that we developed as a small microservice. You can develop other small apps and make them interact with it to start playing with a microservices architecture.

What other areas around Java microservices, Docker and Kubernetes would you like to learn about? Let me know by email. I plan on making additional tutorials on these topics and will be happy to pick topics based on your feedback!