This sample deploys a simple application composed of four separate microservices which will be used to demonstrate various features of the Istio service mesh.
If you use GKE, please ensure your cluster has at least 4 standard GKE nodes.
Set up Istio by following the instructions in the Installation guide.
In this sample we will deploy a simple application that displays information about a book, similar to a single catalog entry of an online book store. Displayed on the page is a description of the book, book details (ISBN, number of pages, and so on), and a few book reviews.
The BookInfo application is broken into four separate microservices:

- productpage. The productpage microservice calls the details and reviews microservices to populate the page.
- details. The details microservice contains book information.
- reviews. The reviews microservice contains book reviews. It also calls the ratings microservice.
- ratings. The ratings microservice contains book ranking information that accompanies a book review.

There are 3 versions of the reviews microservice:

- Version v1 doesn't call the ratings service.
- Version v2 calls the ratings service and displays each rating as 1 to 5 black stars.
- Version v3 calls the ratings service and displays each rating as 1 to 5 red stars.
The end-to-end architecture of the application is shown below.
This application is polyglot, i.e., the microservices are written in different languages.
Change directory to the root of the Istio installation directory.
Bring up the application containers:
kubectl apply -f <(istioctl kube-inject -f samples/apps/bookinfo/bookinfo.yaml)
The above command launches four microservices and creates the gateway ingress resource as illustrated in the diagram below. The reviews microservice has 3 versions: v1, v2, and v3.
Note that in a realistic deployment, new versions of a microservice are deployed over time instead of deploying all versions simultaneously.
Notice that the istioctl kube-inject command is used to modify the bookinfo.yaml file before creating the deployments. This injects Envoy into the Kubernetes resources as documented here. Consequently, all of the microservices are packaged with an Envoy sidecar that manages incoming and outgoing calls for the service. The updated diagram looks like this:
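If you want to sanity-check the injection before applying, you can count the sidecar containers in the injected manifest. The helper below is a hypothetical sketch, not part of the Istio CLI, and matching the sidecar by a "proxy" image name is an assumption that may not hold across Istio versions:

```shell
# Hypothetical helper: count sidecar containers in a manifest read on stdin.
# Matching on an image name containing "proxy" is an assumption.
count_proxy_sidecars() {
  grep -c 'image:.*proxy'
}
```

Piping `istioctl kube-inject -f samples/apps/bookinfo/bookinfo.yaml` into this helper should report one sidecar per injected deployment.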
Confirm all services and pods are correctly defined and running:
kubectl get services
which produces the following output:
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
details 10.0.0.31 <none> 9080/TCP 6m
istio-ingress 10.0.0.122 <pending> 80:31565/TCP 8m
istio-manager 10.0.0.189 <none> 8080/TCP 8m
istio-mixer 10.0.0.132 <none> 9091/TCP,42422/TCP 8m
kubernetes 10.0.0.1 <none> 443/TCP 14d
productpage 10.0.0.120 <none> 9080/TCP 6m
ratings 10.0.0.15 <none> 9080/TCP 6m
reviews 10.0.0.170 <none> 9080/TCP 6m
and
kubectl get pods
which produces
NAME READY STATUS RESTARTS AGE
details-v1-1520924117-48z17 2/2 Running 0 6m
istio-ingress-3181829929-xrrk5 1/1 Running 0 8m
istio-manager-175173354-d6jm7 2/2 Running 0 8m
istio-mixer-3883863574-jt09j 2/2 Running 0 8m
productpage-v1-560495357-jk1lz 2/2 Running 0 6m
ratings-v1-734492171-rnr5l 2/2 Running 0 6m
reviews-v1-874083890-f0qf0 2/2 Running 0 6m
reviews-v2-1343845940-b34q5 2/2 Running 0 6m
reviews-v3-1813607990-8ch52 2/2 Running 0 6m
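In a script, the same check can be automated. The helper below is a sketch (not part of the sample) that reads `kubectl get pods` output on stdin and fails if any pod is not in the Running state:

```shell
# Hypothetical helper: fail if any pod in `kubectl get pods` output
# (read on stdin) has a STATUS other than Running.
check_pods_running() {
  awk 'NR > 1 && $3 != "Running" { bad = 1 } END { exit bad }'
}
```

For example: `kubectl get pods | check_pods_running && echo "all pods are Running"`.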
Determine the gateway ingress URL:
kubectl get ingress -o wide
NAME HOSTS ADDRESS PORTS AGE
gateway * 130.211.10.121 80 1d
If your Kubernetes cluster is running in an environment that supports external load balancers, and the Istio ingress service was able to obtain an External IP, the ingress resource ADDRESS will be equal to the ingress service external IP.
export GATEWAY_URL=130.211.10.121:80
If the service is unable to obtain an external IP, the ingress ADDRESS may instead display a list of NodePort addresses. In this case, you can use any of those addresses, along with the NodePort, to access the ingress. If the cluster has a firewall, however, you will also need to create a firewall rule to allow TCP traffic to the NodePort. On GKE, for instance, you can create such a rule with the following command:
gcloud compute firewall-rules create allow-book --allow tcp:$(kubectl get svc istio-ingress -o jsonpath='{.spec.ports[0].nodePort}')
If your deployment environment does not support external load balancers (e.g., minikube), the ADDRESS field will be empty. In this case you can use the service NodePort instead:
export GATEWAY_URL=$(kubectl get po -l istio=ingress -o 'jsonpath={.items[0].status.hostIP}'):$(kubectl get svc istio-ingress -o 'jsonpath={.spec.ports[0].nodePort}')
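The two cases above can be folded into a single helper. The function below is a sketch, not part of the sample; its arguments stand in for the values you would fetch with the kubectl commands shown above (external IP, node host IP, NodePort):

```shell
# Sketch: derive the gateway URL from values obtained via kubectl.
#   $1 - ingress external IP (may be empty if no load balancer was assigned)
#   $2 - node host IP (NodePort fallback)
#   $3 - ingress NodePort
gateway_url() {
  if [ -n "$1" ]; then
    echo "$1:80"
  else
    echo "$2:$3"
  fi
}
```

You would then set `GATEWAY_URL` from its output, falling back to the node IP and NodePort whenever the first argument is empty.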
Confirm that the BookInfo application is running with the following curl command:
curl -o /dev/null -s -w "%{http_code}\n" http://${GATEWAY_URL}/productpage
200
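Because the ingress can take a few moments to begin routing traffic, a small retry loop is useful in scripts. The helper below is a sketch, assuming GATEWAY_URL is set as above; the retry count and delay are arbitrary choices:

```shell
# Sketch: poll the productpage until it answers HTTP 200, up to 5 attempts.
# Assumes GATEWAY_URL is already exported.
wait_for_productpage() {
  for attempt in 1 2 3 4 5; do
    code=$(curl -o /dev/null -s -w "%{http_code}" "http://${GATEWAY_URL}/productpage")
    if [ "$code" = "200" ]; then
      return 0
    fi
    sleep 2
  done
  return 1
}
```

Calling `wait_for_productpage && echo "BookInfo is up"` gives the deployment time to settle before you proceed.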
When you’re finished experimenting with the BookInfo sample, you can uninstall it as follows:
Delete the routing rules and terminate the application pods:
samples/apps/bookinfo/cleanup.sh
Confirm shutdown:
istioctl get route-rules #-- there should be no more routing rules
kubectl get pods #-- the BookInfo pods should be deleted
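To double-check the cleanup from a script, you can look for any remaining BookInfo pods. The helper below is a sketch that matches the pod-name prefixes shown in the earlier kubectl get pods output:

```shell
# Hypothetical helper: succeed only if no BookInfo pods appear on stdin.
bookinfo_pods_gone() {
  ! grep -E '^(productpage|details|ratings|reviews)-' >/dev/null
}
```

For example: `kubectl get pods | bookinfo_pods_gone && echo "BookInfo pods deleted"`.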
Now that you have the BookInfo sample up and running, you can point your browser to http://$GATEWAY_URL/productpage to see the running application, and use Istio to control traffic routing, inject faults, rate-limit services, and more.
To get started, check out the request routing task.