What is Istio?

Google presents Istio as an open platform to connect, monitor, and secure microservices.

Istio is a service mesh implementation that provides many cloud-native capabilities like:

  • Traffic management: Service Discovery, Load balancing, Failure recovery, A/B testing, Canary releases, etc…
  • Observability: Request Tracing, Metrics, Monitoring, Auditing, Logging, etc…
  • Security: ACLs, Access control, Rate limiting, End-to-end authentication, etc…

Istio delivers all these great features without any changes to the code of the microservices running with it on the same Kubernetes cluster.

In our case, we had already implemented many of these features and capabilities ourselves while writing our microservices. If we had had Istio from the beginning, we could have saved a lot of time and effort by delegating all of these capabilities to it.

Istio Architecture

The Istio service mesh is composed of two parts:

  • The data plane is responsible for establishing, securing, and controlling the traffic through the service mesh.
  • The control plane is the brains of the mesh: it is made up of the management components that instruct the data plane how to behave, and it exposes an API for operators to manipulate the network behavior.

Istio Components

From the Istio Architecture diagram, we can see different components, located in different areas of the ecosystem:

Envoy

Sidecar proxies per microservice to handle ingress/egress traffic between services in the cluster and from a service to external services. The proxies form a secure microservice mesh providing a rich set of functions like discovery, rich layer-7 routing, circuit breakers, policy enforcement and telemetry recording/reporting functions.

The service mesh is not an overlay network. It simplifies and enhances how microservices in an application talk to each other over the network provided by the underlying platform.

Envoy is deployed as a sidecar to the relevant microservice in the same Kubernetes pod. This deployment allows Istio to extract a wealth of signals about traffic behavior as attributes. Istio can, in turn, use these attributes in Mixer to enforce policy decisions, and send them to monitoring systems to provide information about the behavior of the entire mesh.

Mixer

Mixer is a central component that is leveraged by the proxies and microservices to enforce policies such as authorization, rate limits, quotas, authentication, request tracing and telemetry collection.

Mixer includes a flexible plugin model. This model enables Istio to interface with a variety of host environments and infrastructure backends. Thus, Istio abstracts the Envoy proxy and Istio-managed services from these details.

Pilot

Pilot provides service discovery for the Envoy sidecars, traffic management capabilities for intelligent routing (e.g., A/B tests, canary deployments, etc.), and resiliency (timeouts, retries, circuit breakers, etc.).

Pilot converts high level routing rules that control traffic behavior into Envoy-specific configurations, and propagates them to the sidecars at runtime. Pilot abstracts platform-specific service discovery mechanisms and synthesizes them into a standard format that any sidecar conforming with the Envoy data plane APIs can consume. This loose coupling allows Istio to run on multiple environments such as Kubernetes, Consul, or Nomad, while maintaining the same operator interface for traffic management.

Citadel

Citadel provides strong service-to-service and end-user authentication with built-in identity and credential management. You can use Citadel to upgrade unencrypted traffic in the service mesh. Using Citadel, operators can enforce policies based on service identity rather than on network controls. Starting from release 0.5, you can use Istio’s authorization feature to control who can access your services.

Galley

Galley validates user authored Istio API configuration on behalf of the other Istio control plane components. Over time, Galley will take over responsibility as the top-level configuration ingestion, processing and distribution component of Istio. It will be responsible for insulating the rest of the Istio components from the details of obtaining user configuration from the underlying platform (e.g. Kubernetes).

Getting started with Istio

Requirements

We will be playing with Istio on Kubernetes. To test the solution, you need, as usual, a running Minikube on your machine.

The examples in this chapter run on Minikube v0.35.0 with a custom configuration: 4 CPUs and 8 GB of memory.
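
For example, you can start Minikube with a matching configuration (the flag values below simply mirror that setup):

$ minikube start --cpus 4 --memory 8192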

Get & Install Istio

To start downloading and installing Istio, just enter the following command:

$ curl -L https://git.io/getLatestIstio | sh -

By the end of the command execution, you will see a message like this one:

Add /Users/n.lamouchi/istio-1.0.6/bin to your path; e.g copy paste in your shell and/or ~/.profile:

$ export PATH="$PATH:/Users/n.lamouchi/istio-1.0.6/bin"

Next, we will move into the Istio package directory:

$ cd istio-1.0.6/

As a first step, you have to install Istio’s Custom Resource Definitions:

$ kubectl apply -f install/kubernetes/helm/istio/templates/crds.yaml

What is a Custom Resource Definition?

A custom resource definition (CRD) is a powerful feature introduced in Kubernetes 1.7 that enables users to add their own custom objects to the Kubernetes cluster and use them like any other native Kubernetes objects.
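
Once the CRDs are applied, you can list them to confirm they were registered; the exact set varies by Istio version, but you should see entries like virtualservices.networking.istio.io and destinationrules.networking.istio.io:

$ kubectl get crds | grep 'istio.io'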

Next, we need to install Istio’s core components. We have four different options to do this:

  • Option 1: Install Istio WITHOUT mutual TLS authentication between sidecars
  • Option 2: Install Istio WITH default mutual TLS authentication
  • Option 3: Render Kubernetes manifest with Helm and deploy with kubectl
  • Option 4: Use Helm and Tiller to manage the Istio deployment

For a production setup of Istio, installing with the Helm chart (Option 4) is recommended, as it exposes all the configuration options and permits customizing Istio to operator-specific requirements.
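
As a sketch only (we won't use it in this tutorial), the Helm/Tiller based install for this Istio release would look something like this:

$ helm install install/kubernetes/helm/istio --name istio --namespace istio-system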

In this tutorial, we will use Option 1.

To install Istio:

$ kubectl apply -f install/kubernetes/istio-demo.yaml

Verifying the installation

To be sure that the Istio components were correctly installed, the following Kubernetes Services need to be present: istio-pilot, istio-ingressgateway, istio-policy, istio-telemetry, prometheus, istio-galley, and (optionally) istio-sidecar-injector:

$ kubectl get svc -n istio-system

NAME                    TYPE          CLUSTER-IP      EXTERNAL-IP   PORT(S)              AGE
grafana                 ClusterIP     10.102.186.225  <none>        3000/TCP             35m
istio-citadel           ClusterIP     10.108.239.218  <none>        8060/TCP,9093/TCP    58m
istio-egressgateway     ClusterIP     10.103.151.29   <none>        80/TCP,443/TCP       58m
istio-galley            ClusterIP     10.96.55.14     <none>        443/TCP,9093/TCP     58m
istio-ingressgateway    LoadBalancer  10.110.91.248   <pending>     80:31380/TCP..       58m
istio-pilot             ClusterIP     10.104.95.143   <none>        15010/TCP..          58m
istio-policy            ClusterIP     10.100.19.140   <none>        9091/TCP..           58m
istio-sidecar-injector  ClusterIP     10.101.13.203   <none>        443/TCP              58m
istio-telemetry         ClusterIP     10.103.135.98   <none>        9091/TCP..           58m
jaeger-agent            ClusterIP     None            <none>        5775/UDP..           35m
jaeger-collector        ClusterIP     10.110.82.2     <none>        14267/TCP,14268/TCP  35m
jaeger-query            ClusterIP     10.101.54.162   <none>        16686/TCP            35m
prometheus              ClusterIP     10.101.210.170  <none>        9090/TCP             58m
servicegraph            ClusterIP     10.99.60.12     <none>        8088/TCP             35m
tracing                 ClusterIP     10.98.62.125    <none>        80/TCP               35m
zipkin                  ClusterIP     10.100.54.120   <none>        9411/TCP             35m

For each of the Kubernetes Services listed above, we will find corresponding Pods:

$ kubectl get pods -n istio-system

NAME                                      READY   STATUS      RESTARTS   AGE
grafana-59b8896965-n4rvr                  1/1     Running     0          91m
istio-citadel-6f444d9999-xc27q            1/1     Running     0          91m
istio-cleanup-secrets-k2dzg               0/1     Completed   0          91m
istio-egressgateway-6d79447874-8twkk      1/1     Running     0          91m
istio-galley-685bb48846-tht5x             1/1     Running     0          91m
istio-grafana-post-install-rmflr          0/1     Completed   0          91m
istio-ingressgateway-5b64fffc9f-56tm2     1/1     Running     0          91m
istio-pilot-7f558fc848-2fscl              2/2     Running     0          91m
istio-policy-547d64b8d7-skpzm             2/2     Running     0          91m
istio-security-post-install-nlfmd         0/1     Completed   0          91m
istio-sidecar-injector-5d8dd9448d-rdbq8   1/1     Running     0          91m
istio-telemetry-c5488fc49-b8sv8           2/2     Running     0          91m
istio-tracing-6b994895fd-6wxvx            1/1     Running     0          91m
prometheus-76b7745b64-2tgkw               1/1     Running     0          91m
servicegraph-cb9b94c-2x5cb                1/1     Running     1          91m

All Pods need to be in the Running status, except the istio-cleanup-secrets-*, istio-grafana-post-install-*, and istio-security-post-install-* Pods, which will be in the Completed status. These three Pods run during a post-installation phase to perform post-installation tasks, such as cleaning up the installation secrets.

Envoy Sidecar Injection

In the service mesh world, a sidecar is a utility container in the pod, and its purpose is to support the main container. In the Istio case, the sidecar will be an Envoy proxy that will be deployed to each pod. The process of adding Envoy into a pod is called Sidecar Injection. This action can be done in two ways:

  • Automatically, using the Istio sidecar injector: automatic injection happens at pod creation time, and the controller resource is left unmodified. Sidecars can be updated selectively by manually deleting pods, or systematically with a deployment rolling update.
  • Manually, using the istioctl CLI tool: manual injection modifies the controller configuration, e.g. the Deployment, by changing the pod template spec so that all pods for that Deployment are created with the injected sidecar. Adding, updating, or removing the sidecar requires modifying the entire Deployment.

Automatic Sidecar Injection

To enable automatic sidecar injection, just add the istio-injection label to the Kubernetes namespace. For example, to enable it in the default namespace:

$ kubectl label namespace default istio-injection=enabled --overwrite

Now, whenever a pod is created in this namespace, the Envoy sidecar is automatically injected into it.
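
You can confirm which namespaces have injection enabled by displaying the label as a column; the default namespace should now show enabled under ISTIO-INJECTION:

$ kubectl get namespace -L istio-injection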

Manual Sidecar Injection

To inject the sidecar into the deployment manually, using the in-cluster configuration:

$ istioctl kube-inject -f samples/bookinfo/platform/kube/bookinfo.yaml \
                        --output bookinfo-injected.yaml
            
$ kubectl apply -f bookinfo-injected.yaml

BookInfo Sample Application

This example deploys a sample application composed of four separate microservices, used to demonstrate various Istio features. The application displays information about a book, similar to a single catalog entry of an online book store. Displayed on the page is a description of the book, book details (ISBN, number of pages, and so on), and a few book reviews.

The Bookinfo application is broken into four separate microservices:

  • productpage: The productpage microservice calls the details and reviews microservices to populate the page.
  • details: The details microservice contains book information.
  • reviews: The reviews microservice contains book reviews. It also calls the ratings microservice.
  • ratings: The ratings microservice contains book ranking information that accompanies a book review.

There are 3 versions of the reviews microservice:

  • Version v1 doesn’t call the ratings service.
  • Version v2 calls the ratings service, and displays each rating as 1 to 5 black stars.
  • Version v3 calls the ratings service, and displays each rating as 1 to 5 red stars.

Note that the two manual-injection commands shown earlier (istioctl kube-inject followed by kubectl apply) can be combined into a single command:

$ kubectl create -f <(istioctl kube-inject -f samples/bookinfo/platform/kube/bookinfo.yaml)

or, equivalently:

$ istioctl kube-inject -f samples/bookinfo/platform/kube/bookinfo.yaml | kubectl apply -f -

These commands will inject the Istio Envoy sidecar into the Kubernetes Deployment object.

Take the sample Deployment of the details-v1 microservice, which looks like this:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: details-v1
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: details
        version: v1
    spec:
      containers:
      - name: details
        image: istio/examples-bookinfo-details-v1:1.8.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9080

The Deployment after the injection of the Istio sidecar will look like this:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  creationTimestamp: null
  name: details-v1
spec:
  replicas: 1
  strategy: {}
  template:
    metadata:
      annotations:
        sidecar.istio.io/status: '...'
      creationTimestamp: null
      labels:
        app: details
        version: v1
    spec:
      containers:
      - name: details
        image: istio/examples-bookinfo-details-v1:1.8.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9080
        resources: {}
      - args:
        - proxy
        - sidecar
        - --configPath
        - /etc/istio/proxy
        - --binaryPath
        - /usr/local/bin/envoy
        - --serviceCluster
        - details
        - --drainDuration
        - 45s
        - --parentShutdownDuration
        - 1m0s
        - --discoveryAddress
        - istio-pilot.istio-system:15007
        - --discoveryRefreshDelay
        - 1s
        - --zipkinAddress
        - zipkin.istio-system:9411
        - --connectTimeout
        - 10s
        - --proxyAdminPort
        - "15000"
        - --controlPlaneAuthPolicy
        - NONE
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: INSTANCE_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: ISTIO_META_POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: ISTIO_META_INTERCEPTION_MODE
          value: REDIRECT
        - name: ISTIO_METAJSON_LABELS
          value: |
            {"app":"details","version":"v1"}            
        image: docker.io/istio/proxyv2:1.0.6
        imagePullPolicy: IfNotPresent
        name: istio-proxy
        ports:
        - containerPort: 15090
          name: http-envoy-prom
          protocol: TCP
        resources:
          requests:
            cpu: 10m
        securityContext:
          readOnlyRootFilesystem: true
          runAsUser: 1337
        volumeMounts:
        - mountPath: /etc/istio/proxy
          name: istio-envoy
        - mountPath: /etc/certs/
          name: istio-certs
          readOnly: true
      initContainers:
      - args:
        - -p
        - "15001"
        - -u
        - "1337"
        - -m
        - REDIRECT
        - -i
        - '*'
        - -x
        - ""
        - -b
        - "9080"
        - -d
        - ""
        image: docker.io/istio/proxy_init:1.0.6
        imagePullPolicy: IfNotPresent
        name: istio-init
        resources: {}
        securityContext:
          capabilities:
            add:
            - NET_ADMIN
          privileged: true
      volumes:
      - emptyDir:
          medium: Memory
        name: istio-envoy
      - name: istio-certs
        secret:
          optional: true
          secretName: istio.default
status: {}

All the extra parameters and configuration are added via the istioctl kube-inject command.

After executing the command:

$ kubectl create -f <(istioctl kube-inject -f samples/bookinfo/platform/kube/bookinfo.yaml)

service/details created
deployment.extensions/details-v1 created
service/ratings created
deployment.extensions/ratings-v1 created
service/reviews created
deployment.extensions/reviews-v1 created
deployment.extensions/reviews-v2 created
deployment.extensions/reviews-v3 created
service/productpage created
deployment.extensions/productpage-v1 created

To verify that the sidecar is deployed in the same Deployment as the microservice, just type:

$ kubectl get deployment details-v1 -o wide

NAME         READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS            IMAGES    SELECTOR
details-v1   1/1     1            1           63m   details,istio-proxy   ...       ...

We can even see that there are two containers in the details-v1-* pod.
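
To list the container names directly, you can query the pod spec with JSONPath (the expression below simply picks the first pod matching the app=details label); it should print details istio-proxy:

$ kubectl get pods -l app=details \
    -o jsonpath='{.items[0].spec.containers[*].name}'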

To be sure that everything is OK, we need to verify that the BookInfo Services and Pods are present:

$ kubectl get svc

NAME          TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
details       ClusterIP   10.106.82.61    <none>        9080/TCP   4h51m
kubernetes    ClusterIP   10.96.0.1       <none>        443/TCP    11h
productpage   ClusterIP   10.100.126.93   <none>        9080/TCP   4h51m
ratings       ClusterIP   10.98.201.254   <none>        9080/TCP   4h51m
reviews       ClusterIP   10.110.241.59   <none>        9080/TCP   4h51m

And:

$ kubectl get pod

NAME                              READY   STATUS    RESTARTS   AGE
details-v1-55bc45969c-99xj2       2/2     Running   0          4h52m
productpage-v1-5b597ff459-pfpmt   2/2     Running   0          4h52m
ratings-v1-7877895db5-vchw2       2/2     Running   0          4h52m
reviews-v1-699587b49b-bt7z5       2/2     Running   0          4h52m
reviews-v2-cc7cd59cc-k6l6j        2/2     Running   0          4h52m
reviews-v3-6fbcf56df8-nc2lf       2/2     Running   0          4h52m

Now, we can move on to the next steps and enjoy the great Istio features :)

Traffic Management

Istio Gateway & VirtualService

Now that the Bookinfo services are up and running, we need to make the Services accessible from outside of your Kubernetes cluster. An Istio Gateway object is used for this purpose.

An Istio Gateway configures a load balancer for HTTP/TCP traffic at the edge of the service mesh and enables ingress traffic for an application. Unlike a Kubernetes Ingress, an Istio Gateway only configures the L4-L6 functions (for example, ports to expose, TLS configuration). Users can then use standard Istio rules to control HTTP requests, as well as TCP traffic entering a Gateway, by binding a VirtualService to it.

We can define the Ingress gateway for the Bookinfo application using the sample gateway configuration:

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: bookinfo-gateway
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"

VirtualService defines the rules that control how requests for a service are routed within an Istio service mesh. For example, a VirtualService could route requests to different versions of a service or to a completely different service than was requested. Requests can be routed based on the request source and destination, HTTP paths and header fields, and weights associated with individual service versions.

The VirtualService configuration looks like:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: bookinfo
spec:
  hosts:
  - "*"
  gateways:
  - bookinfo-gateway
  http:
  - match:
    - uri:
        exact: /productpage
    - uri:
        exact: /login
    - uri:
        exact: /logout
    - uri:
        prefix: /api/v1/products
    route:
    - destination:
        host: productpage
        port:
          number: 9080

Let’s create the Istio Gateway and the VirtualService:

$ kubectl apply -f samples/bookinfo/networking/bookinfo-gateway.yaml

Confirm the gateway has been created:

$ kubectl get gateway

NAME               AGE
bookinfo-gateway   59m

Let’s export the INGRESS_HOST, the INGRESS_PORT and the GATEWAY_URL:

$ export INGRESS_HOST=$(minikube ip)

$ export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway \
                    -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}')

$ export GATEWAY_URL=$INGRESS_HOST:$INGRESS_PORT

To test the Gateway:

$ curl -o /dev/null -s -w "%{http_code}\n" http://$GATEWAY_URL/productpage

You should get 200 as the response code.

You can also point your browser to http://$GATEWAY_URL/productpage to view the Bookinfo web page. If you refresh the page several times, you should see different versions of the reviews section shown in productpage, presented in a round-robin style: one version has red stars, another has black stars, and a third has no stars at all. This is because three versions of the reviews microservice are deployed in our sample BookInfo application, and we haven't yet used Istio to control the version routing.

You can verify the availability of the three versions of the reviews microservice:

$ kubectl get pods -l app=reviews

NAME                          READY   STATUS    RESTARTS   AGE
reviews-v1-699587b49b-v44wr   2/2     Running   0          5h31m
reviews-v2-cc7cd59cc-rgc86    2/2     Running   0          5h31m
reviews-v3-6fbcf56df8-wndtz   2/2     Running   0          5h31m

We got different versions of the reviews section while refreshing because, by default, Istio dispatches requests to load-balanced services using round-robin scheduling. One of the great features of Istio is the ability to route traffic to a dedicated version of a service, or even to split requests across versions by weight.

In the next steps, we will see how to route the traffic based on version.

Destination Rules

Before we can use Istio to control the Bookinfo version routing, we need to define the available versions of an application, called subsets. These subsets are defined in an Istio object called a DestinationRule. The version to serve can then be chosen based on criteria (headers, URL, etc…) defined for each version. We can use this flexibility to do Blue-Green Deployments, A/B Testing, and Canary Releases.

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: productpage
spec:
  host: productpage <1>
  subsets:
  - name: v1 <2>
    labels:
      version: v1 <3>
  1. The Kubernetes Service name on which we will be routing the traffic
  2. The subset element name
  3. The subset element version

We will create the default destination rules for the Bookinfo services, using the sample destination-rule-all.yaml:

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: productpage
spec:
  host: productpage
  subsets:
  - name: v1
    labels:
      version: v1
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
  - name: v3
    labels:
      version: v3
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: ratings
spec:
  host: ratings
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
  - name: v2-mysql
    labels:
      version: v2-mysql
  - name: v2-mysql-vm
    labels:
      version: v2-mysql-vm
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: details
spec:
  host: details
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
---

Run the following command to create default Destination Rules:

$ kubectl apply -f samples/bookinfo/networking/destination-rule-all.yaml

Wait a few seconds for the destination rules to propagate.

You can verify that the destination rules are correctly created, using the following command:

$ kubectl get destinationrules -o yaml

Now, we will change the default round-robin behavior of the traffic routing: we will route all the traffic to reviews-v1, using a new VirtualService configuration that looks like this:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1

The BookInfo sample provides a virtual-service-all-v1.yaml file that holds the necessary configuration of the new VirtualServices. To deploy it, just type:

$ kubectl apply -f samples/bookinfo/networking/virtual-service-all-v1.yaml

You can verify that the reviews VirtualService was correctly created, using the following command:

$ kubectl get virtualservices reviews -o yaml

Now try reloading the page multiple times, and note how only reviews:v1 is displayed each time.

Next, we will change the route configuration so that all traffic from a specific user is routed to a specific service version. In our example, all traffic from a user named jason will be routed to the service reviews:v2.

The virtual-service-reviews-test-v2.yaml configuration covers this case:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - match:
    - headers:
        end-user:
          exact: jason
    route:
    - destination:
        host: reviews
        subset: v2
  - route:
    - destination:
        host: reviews
        subset: v1

To deploy it, just type:

$ kubectl apply -f samples/bookinfo/networking/virtual-service-reviews-test-v2.yaml

You can verify that the reviews VirtualService was correctly updated, using the following command:

$ kubectl get virtualservices reviews -o yaml

To test the new VirtualService, click the Sign in button and log in using jason as the username and any value as the password. Now, you will see reviews:v2. If you log out, you will get reviews:v1 back.

Next, we will see how to gradually migrate traffic from one version of a microservice to another. In our example, we will send 50% of the traffic to reviews:v1 and 50% to reviews:v3:

$ kubectl apply -f samples/bookinfo/networking/virtual-service-reviews-50-v3.yaml

Let’s verify the content of the new VirtualService:

$ kubectl get virtualservice reviews -o yaml

The route is now configured to send 50% of the traffic to the v1 subset and 50% to the v3 subset:

spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
      weight: 50
    - destination:
        host: reviews
        subset: v3
      weight: 50

Now try reloading the page multiple times: you will see reviews:v1 and reviews:v3 alternately, each roughly half of the time.
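
You can also observe the split from the command line. The loop below is an illustrative sketch: it assumes the star icons on the page are rendered with the CSS class glyphicon-star, so requests served by reviews:v1 (which shows no ratings) print 0, while those served by reviews:v3 print a star count:

$ for i in $(seq 1 10); do
    curl -s http://$GATEWAY_URL/productpage | grep -o "glyphicon-star" | wc -l
  done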

Observability

Distributed Tracing

Hello Jaeger

To access the Jaeger Dashboard, establish port forwarding from local port 16686 to the Tracing instance:

$ kubectl port-forward -n istio-system \
  $(kubectl get pod -n istio-system -l app=jaeger -o jsonpath='{.items[0].metadata.name}') \
  16686:16686

In your browser, go to http://127.0.0.1:16686

From the Services menu, select the productpage service.

Scroll to the bottom and click the Find Traces button to see the traces.

If you click on a trace, you should see more details about it.

When /productpage is invoked, several BookInfo services are called. Each of these service calls corresponds to a span, and the page call itself corresponds to the trace.

Although Istio proxies are able to automatically send spans, they need some hints to tie together the entire trace. Applications need to propagate the appropriate HTTP headers so that when the proxies send span information, the spans can be correlated correctly into a single trace.

To do this, an application needs to collect and propagate the following headers from the incoming request to any outgoing requests:

  • x-request-id
  • x-b3-traceid
  • x-b3-spanid
  • x-b3-parentspanid
  • x-b3-sampled
  • x-b3-flags
  • x-ot-span-context

When you make downstream calls in your applications, make sure to include these headers.
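
As a minimal sketch of what propagation means, suppose a service has stashed the incoming header values in environment variables (the variable names here are hypothetical); a downstream call to the in-mesh ratings service would then forward them explicitly:

$ curl http://ratings:9080/ratings/0 \
    -H "x-request-id: $REQ_ID" \
    -H "x-b3-traceid: $TRACE_ID" \
    -H "x-b3-spanid: $SPAN_ID" \
    -H "x-b3-parentspanid: $PARENT_SPAN_ID" \
    -H "x-b3-sampled: $SAMPLED"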

Trace sampling

When using the Bookinfo sample application above, every time you access /productpage you see a corresponding trace in the Jaeger dashboard. This sampling rate (which is 100% in the BookInfo example) is suitable for a test or low traffic mesh, which is why it is used as the default for the demo installs.

In other configurations, Istio defaults to generating trace spans for 1 out of every 100 requests (a sampling rate of 1%).

You can control the trace sampling percentage in a running mesh by editing the istio-pilot deployment and changing an environment variable, with the following steps:

  1. To open your text editor with the deployment configuration file loaded, run the following command:

$ kubectl -n istio-system edit deploy istio-pilot

  2. Find the PILOT_TRACE_SAMPLING environment variable, and change its value: field to your desired percentage.

Valid values are from 0.0 to 100.0, with a precision of 0.01.
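
Alternatively, you can change the variable without opening an editor; for example, to sample half of all requests (50 is just an example value):

$ kubectl -n istio-system set env deploy/istio-pilot PILOT_TRACE_SAMPLING=50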

Grafana

The Grafana add-on in Istio is a preconfigured instance of Grafana. The base image (grafana/grafana:5.0.4) has been modified to start with both a Prometheus data source and the Istio Dashboard installed. The base install files for Istio, and Mixer in particular, ship with a default configuration of global (used for every service) metrics. The Istio Dashboard is built to be used in conjunction with the default Istio metrics configuration and a Prometheus backend.

Establish port forwarding from local port 3000 to the Grafana instance:

$ kubectl -n istio-system port-forward \
  $(kubectl -n istio-system get pod -l app=grafana -o jsonpath='{.items[0].metadata.name}') \
  3000:3000

Browse to http://localhost:3000 and navigate to the Istio Mesh Dashboard.


Prometheus

Mixer comes with a built-in Prometheus adapter that exposes an endpoint serving generated metric values. The Prometheus add-on is a Prometheus server that comes preconfigured to scrape Mixer endpoints to collect the exposed metrics. It provides a mechanism for persistent storage and querying of Istio metrics.

To access the Prometheus Dashboard, establish port forwarding from local port 9090 to the Prometheus instance:

$ kubectl -n istio-system port-forward \
  $(kubectl -n istio-system get pod -l app=prometheus -o jsonpath='{.items[0].metadata.name}') \
  9090:9090

Browse to http://localhost:9090/graph, and in the “Expression” input box, enter istio_request_byte_count. Then click Execute.
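
Standard PromQL expressions work here too. For example, assuming istio_request_byte_count behaves as a counter (worth verifying in your install), the following charts its per-second rate over 5-minute windows:

rate(istio_request_byte_count[5m])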


Service Graph

The ServiceGraph service provides endpoints for generating and visualizing a graph of services within a mesh. It exposes the following endpoints:

  • /force/forcegraph.html is an interactive D3.js visualization.
  • /dotviz is a static Graphviz visualization.
  • /dotgraph provides a DOT serialization.
  • /d3graph provides a JSON serialization for D3 visualization.
  • /graph provides a generic JSON serialization.

To access the Service Graph Dashboard, establish port forwarding from local port 8088 to the Service Graph instance:

$ kubectl -n istio-system port-forward \
  $(kubectl -n istio-system get pod -l app=servicegraph -o jsonpath='{.items[0].metadata.name}') \
  8088:8088

Browse to http://localhost:8088/dotviz:
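
You can also fetch the raw serializations through the same port-forward; for example, the generic JSON graph:

$ curl -s http://localhost:8088/graph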


The ServiceGraph example is built on top of Prometheus queries and depends on the standard Istio metric configuration.

Conclusion

That’s all folks! We have presented the main concepts and components of the Istio service mesh. This introductory chapter covered only the installation and the core concepts of Istio. As we saw in the Traffic Management section, there are countless scenarios and use cases that you may want to cover, like fault injection, controlling ingress and egress traffic, circuit breakers, etc.

We did not have the opportunity to cover the security capabilities of Istio, simply because they would need several chapters of their own. 😛

Stay tuned, I will be writing some quickies about more great capabilities of Istio! 😎