Examples of common install patterns for Signal Sciences in Kubernetes
The Many Flavors of Kubernetes
Recently I have found that Kubernetes has become commonplace in most organizations I work with, though it takes many forms. Whether companies choose one of the alphabet soup of managed offerings (EKS, AKS, PKS, GKE, etc.), run it on-premises, or consume it as the underlying technology in products like Mesosphere, there is generally some form of Kubernetes in their environment. With that in mind, I wanted to highlight additional working examples of deploying at the different layers with the sidecar method mentioned in a previous blog, 7 Ways To Deploy SigSci in Kubernetes. This walkthrough will focus on the sidecar method, but I have previously posted about the same-container method in Using Signal Sciences with Kubernetes.
First, I shamelessly reused the great work that Aaron Jahoda, one of our sales engineering managers, did to set up Signal Sciences as a sidecar container for an NGINX ingress controller.
The full working example sources, along with instructions for getting a minikube environment up and running, are documented on my GitHub at:
https://github.com/dacoburn/sigsci-ingress-micro-reverse-proxy
Additionally, if you would like to see a demo of this process, you can watch the embedded video I recorded.
Architecture of the Web Apps
In this example Kubernetes deployment, the NGINX Ingress Controller and Signal Sciences Reverse Proxy layers are exposed both internally and externally. Our two microservices are not exposed externally and are only accessible via those two services. The NGINX Ingress Controller has two rules: one for Apache and one for NGINX. Finally, the Signal Sciences Reverse Proxy forwards all traffic on a port to one of the microservices.
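Sketched out, the routing described above looks roughly like this (a simplified view; the names match the deployments later in this post):

external traffic
  ├── NGINX Ingress Controller (with SigSci module + agent sidecar)
  │     ├── /nginx  -> microservice-a (NGINX + SigSci agent sidecar)
  │     └── /apache -> microservice-b (Apache + SigSci agent sidecar)
  └── SigSci Reverse Proxy (agent-only container)
        └── port 80 -> microservice-a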
Building All of the Docker Containers
I find it easiest to create all of the containers used in the deployment up front. Although I have pushed the containers to my Docker Hub account so you can jump straight to the deployments, my recommendation is to step through creating the containers from scratch. I use Aaron’s example for the ingress controller container creation but a different template for the Signal Sciences agent container.
For all the steps you will need to clone the repo from https://github.com/dacoburn/sigsci-ingress-micro-reverse-proxy.
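To follow along, clone the repo; and if you would rather skip the builds, the prebuilt images referenced in the deployment YAML later in this post can be pulled straight from my Docker Hub account:

git clone https://github.com/dacoburn/sigsci-ingress-micro-reverse-proxy.git
cd sigsci-ingress-micro-reverse-proxy
docker pull trickyhu/sigsci-agent-alpine:latest
docker pull trickyhu/sigsci-debian-nginx:latest
docker pull trickyhu/sigsci-apache-alpine:latest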
Signal Sciences Agent Container
Change directory to the SigSciAgentAlpine folder
cd SigSciAgentAlpine
Build the container using make
make build dockeruser=MYUSER dockername=sigsci-agent
Push the container to dockerhub
make deploy dockeruser=MYUSER dockername=sigsci-agent
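Under the hood, these make targets are thin wrappers around docker. A rough equivalent (an assumption on my part; the Makefile in the repo is the source of truth) is:

docker build -t MYUSER/sigsci-agent .
docker push MYUSER/sigsci-agent

The same pattern applies to the microservice containers below.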
Signal Sciences NGINX (microservice-a) Container
Change directory to the SigSciDebianNginx folder
cd SigSciDebianNginx
Build the container using make
make build dockeruser=MYUSER dockername=microservice-a
Push the container to dockerhub
make deploy dockeruser=MYUSER dockername=microservice-a
Signal Sciences Apache (microservice-b) Container
Change directory to the SigSciApacheAlpine folder
cd SigSciApacheAlpine
Build the container using make
make build dockeruser=MYUSER dockername=microservice-b
Push the container to dockerhub
make deploy dockeruser=MYUSER dockername=microservice-b
Ingress Controller Container
Update the mandatory.yaml:
- The container names
- Volume mount details
- Update the Signal Sciences agent container name
- Update the access keys for the Signal Sciences agent
- Run: `docker build -t myregistry/sigsci-module-nginx-ingress:0.22.0 sigsci-module-nginx-ingress/`
- Push the container to dockerhub: `docker push myregistry/sigsci-module-nginx-ingress:0.22.0`
Running the Kubernetes deployments
Now that we have the base containers built, we can start running through the deployments contained in [build_env.yaml.example]. There are a few different deployments in here, and some of them might not be needed in your specific Kubernetes environment, so I recommend looking through them to see which ones you can skip. I also hope that if you are not a Kubernetes expert (like me), I can help you avoid some pitfalls I ran into when doing this exercise! One of the main ones: please make sure that the ingress controller and the apps are in the same namespace. You cannot use an ingress controller for an app in a different namespace.
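Once the deployments are in place, a quick way to confirm everything landed in the same namespace (default in this walkthrough) is:

kubectl get deployments,ingresses --namespace default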
For the purposes of this walkthrough, the directions assume you are using the “+ CREATE” button in the Kubernetes UI. You can also save the example YAML sections to files and apply them with `kubectl apply -f yaml_file.yaml`.
Persistent Volumes
As I was creating this in minikube, I did not have a Persistent Volume (PV) to use. To create one, I logged into the minikube VM, created a directory, and then set up the PV in Kubernetes. We use PVs, or shared volumes, in this configuration so that the Signal Sciences agent and module can communicate over a shared socket file: the agent listens on the socket file, and the module connects to it to send data and receive responses. Managing TCP connections and dynamically pointing the module at the agent would be much more complicated, since it requires a 1:1 mapping between agent and module.
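For reference, here is a minimal sketch of a hostPath-backed PV and matching claim for the ingress deployment. The path, size, and access mode are assumptions from my minikube setup; the claim name matches the claimName referenced in the ingress controller deployment below, and microservice-a and microservice-b follow the same pattern:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: ingress
spec:
  capacity:
    storage: 100Mi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /data/ingress   # directory created inside the minikube VM (assumed path)
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ingress
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi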
Import the Config Maps
This configmap configuration is used to tell the ingress controller to load the Signal Sciences Native module.
kind: ConfigMap
apiVersion: v1
data:
  main-snippet: load_module /usr/lib/nginx/modules/ngx_http_sigsci_module.so;
metadata:
  name: nginx-configuration
  namespace: default
  labels:
    app.kubernetes.io/name: nginx-ingress-controller
    app.kubernetes.io/part-of: nginx-ingress-controller
The following two ConfigMaps are for exposing TCP and UDP services through the ingress controller.
kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  namespace: default
  labels:
    app.kubernetes.io/name: nginx-ingress-controller
    app.kubernetes.io/part-of: nginx-ingress-controller
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: udp-services
  namespace: default
  labels:
    app.kubernetes.io/name: nginx-ingress-controller
    app.kubernetes.io/part-of: nginx-ingress-controller
Importing the Cluster Roles for the Ingress Controller
These may or may not be needed in your environment, depending on whether you are using RBAC (Role-Based Access Control).
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
  namespace: default
  labels:
    app.kubernetes.io/name: nginx-ingress-controller
    app.kubernetes.io/part-of: nginx-ingress-controller
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: nginx-ingress-clusterrole
  labels:
    app.kubernetes.io/name: nginx-ingress-controller
    app.kubernetes.io/part-of: nginx-ingress-controller
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "extensions"
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - "extensions"
    resources:
      - ingresses/status
    verbs:
      - update
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: nginx-ingress-role
  namespace: default
  labels:
    app.kubernetes.io/name: nginx-ingress-controller
    app.kubernetes.io/part-of: nginx-ingress-controller
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - pods
      - secrets
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - configmaps
    resourceNames:
      # Defaults to "<election-id>-<ingress-class>"
      # Here: "<ingress-controller-leader>-<nginx>"
      # This has to be adapted if you change either parameter
      # when launching the nginx-ingress-controller.
      - "ingress-controller-leader-nginx"
    verbs:
      - get
      - update
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ""
    resources:
      - endpoints
    verbs:
      - get
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: nginx-ingress-role-nisa-binding
  namespace: default
  labels:
    app.kubernetes.io/name: nginx-ingress-controller
    app.kubernetes.io/part-of: nginx-ingress-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nginx-ingress-role
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress-clusterrole-nisa-binding
  labels:
    app.kubernetes.io/name: nginx-ingress-controller
    app.kubernetes.io/part-of: nginx-ingress-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress-clusterrole
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: default
The Fun Part! Deploying the Actual Web Apps
Ingress Controller
The deployment YAML for the ingress controller is somewhat long, since it deploys two containers and configures NGINX.
Key callouts from the YAML:
- We specify the container we created (my Docker Hub username is “trickyhu”):
image: trickyhu/sigsci-module-nginx-ingress:0.22.0
- The ingress controller needs elevated permissions in order to bind to low ports.
allowPrivilegeEscalation: true
- Both containers have a volume mount path; in this case we use /var/run:
volumeMounts:
  - name: sigsci-socket
    mountPath: /var/run/
- Just like the ingress controller, we specify the image for the agent:
image: trickyhu/sigsci-agent-alpine:latest
- We also specify the environment variables for the Signal Sciences agent:
- name: SIGSCI_ACCESSKEYID
  value: REPLACE_ME_INGRESS_SITE
- name: SIGSCI_SECRETACCESSKEY
  value: REPLACE_ME_INGRESS_SITE
Full YAML:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: default
  labels:
    app.kubernetes.io/name: nginx-ingress-controller
    app.kubernetes.io/part-of: nginx-ingress-controller
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: nginx-ingress-controller
      app.kubernetes.io/part-of: nginx-ingress-controller
  template:
    metadata:
      labels:
        app.kubernetes.io/name: nginx-ingress-controller
        app.kubernetes.io/part-of: nginx-ingress-controller
      annotations:
        prometheus.io/port: "10254"
        prometheus.io/scrape: "true"
    spec:
      serviceAccountName: nginx-ingress-serviceaccount
      containers:
        - name: nginx-ingress-controller
          image: trickyhu/sigsci-module-nginx-ingress:0.22.0
          args:
            - /nginx-ingress-controller
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
            - --publish-service=$(POD_NAMESPACE)/nginx-ingress-controller
            - --annotations-prefix=nginx.ingress.kubernetes.io
          securityContext:
            allowPrivilegeEscalation: true
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            # www-data -> 33
            runAsUser: 33
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          volumeMounts:
            - name: sigsci-socket
              mountPath: /var/run/
          ports:
            - name: http
              containerPort: 80
            - name: https
              containerPort: 443
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
        - name: sigsci-agent
          image: trickyhu/sigsci-agent-alpine:latest
          volumeMounts:
            - name: sigsci-socket
              mountPath: /var/run/
          env:
            - name: SIGSCI_ACCESSKEYID
              value: REPLACE_ME_INGRESS_SITE
            - name: SIGSCI_SECRETACCESSKEY
              value: REPLACE_ME_INGRESS_SITE
      volumes:
        - name: sigsci-socket
          persistentVolumeClaim:
            claimName: ingress
Microservice A
These next two deployments are fairly simple and essentially the same; the main difference is the containers each one calls.
In the NGINX container we set a couple of environment variables that configure where the SigSci NGINX module communicates with the agent, along with the port NGINX listens on:
- name: SIGSCI_SOCKET
  value: unix:/var/run/sigsci/micro-a.sock
- name: SIGSCI_NGINX_PORT
  value: "8282"
We again specify the image name of the container we created earlier.
image: trickyhu/sigsci-debian-nginx:latest
Since we now have a web server listening on a port, we specify it:
- containerPort: 8282
Of course, we use our volume mount to mount the shared location into the container; in this example it lands at /var/run/sigsci:
volumeMounts:
  - mountPath: /var/run/sigsci
    name: host-mount
For the Signal Sciences agent container we specify the environment variables. Note that we set the access keys, along with a hostname tag and the custom location of the socket file.
env:
  - name: SIGSCI_RPC_ADDRESS
    value: unix:/var/run/sigsci/micro-a.sock
  - name: SIGSCI_HOSTNAME
    value: micro-a
  - name: SIGSCI_SECRETACCESSKEY
    value: REPLACE_ME_MICROA_SITE
  - name: SIGSCI_ACCESSKEYID
    value: REPLACE_ME_MICROA_SITE
Next, the agent container is called:
image: trickyhu/sigsci-agent-alpine:latest
Full YAML:
apiVersion: extensions/v1beta1 kind: Deployment metadata: annotations: deployment.kubernetes.io/revision: "2" creationTimestamp: null generation: 1 labels: k8s-app: microservice-a name: microservice-a selfLink: /apis/extensions/v1beta1/namespaces/nginx-ingress-controller/deployments/microservice-a spec: progressDeadlineSeconds: 600 replicas: 1 revisionHistoryLimit: 10 selector: matchLabels: k8s-app: microservice-a strategy: rollingUpdate: maxSurge: 25% maxUnavailable: 25% type: RollingUpdate template: metadata: creationTimestamp: null labels: k8s-app: microservice-a name: microservice-a spec: containers: - env: - name: SIGSCI_SOCKET value: unix:/var/run/sigsci/micro-a.sock - name: SIGSCI_NGINX_PORT value: "8282" image: trickyhu/sigsci-debian-nginx:latest imagePullPolicy: Always name: microservice-a ports: - containerPort: 8282 protocol: TCP resources: {} securityContext: privileged: false procMount: Default terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /var/run/sigsci name: host-mount - env: - name: SIGSCI_RPC_ADDRESS value: unix:/var/run/sigsci/micro-a.sock - name: SIGSCI_HOSTNAME value: micro-a - name: SIGSCI_SECRETACCESSKEY value: REPLACE_ME_MICROA_SITE - name: SIGSCI_ACCESSKEYID value: REPLACE_ME_MICROA_SITE image: trickyhu/sigsci-agent-alpine:latest imagePullPolicy: Always name: sigsci-agent-alpine resources: {} securityContext: privileged: false procMount: Default terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /var/run/sigsci name: host-mount dnsPolicy: ClusterFirst restartPolicy: Always schedulerName: default-scheduler securityContext: {} terminationGracePeriodSeconds: 30 volumes: - name: host-mount persistentVolumeClaim: claimName: microservice-a
Microservice B
This deployment is basically the same as the last one except for:
- The Apache environment variables for the web service container are different:
- env:
    - name: sigsci_rpc
      value: \/var\/run\/sigsci\/micro-b.sock
    - name: apache_port
      value: "8282"
The Apache container is called:
image: trickyhu/sigsci-apache-alpine:latest
After that, everything else is the same. Full YAML:
apiVersion: extensions/v1beta1 kind: Deployment metadata: annotations: deployment.kubernetes.io/revision: "2" creationTimestamp: null generation: 1 labels: k8s-app: microservice-b name: microservice-b selfLink: /apis/extensions/v1beta1/namespaces/nginx-ingress-controller/deployments/microservice-b spec: progressDeadlineSeconds: 600 replicas: 1 revisionHistoryLimit: 10 selector: matchLabels: k8s-app: microservice-b strategy: rollingUpdate: maxSurge: 25% maxUnavailable: 25% type: RollingUpdate template: metadata: creationTimestamp: null labels: k8s-app: microservice-b name: microservice-b spec: containers: - env: - name: sigsci_rpc value: \/var\/run\/sigsci\/micro-b.sock - name: apache_port value: "8282" image: trickyhu/sigsci-apache-alpine:latest imagePullPolicy: Always name: microservice-b ports: - containerPort: 8282 protocol: TCP resources: {} securityContext: privileged: false procMount: Default terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /var/run/sigsci name: host-mount - env: - name: SIGSCI_RPC_ADDRESS value: unix:/var/run/sigsci/micro-b.sock - name: SIGSCI_HOSTNAME value: micro-b - name: SIGSCI_SECRETACCESSKEY value: REPLACE_ME_MICROB_SITE - name: SIGSCI_ACCESSKEYID value: REPLACE_ME_MICROB_SITE image: trickyhu/sigsci-agent-alpine:latest imagePullPolicy: Always name: sigsci-agent-alpine resources: {} securityContext: privileged: false procMount: Default terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /var/run/sigsci name: host-mount dnsPolicy: ClusterFirst restartPolicy: Always schedulerName: default-scheduler securityContext: {} terminationGracePeriodSeconds: 30 volumes: - name: host-mount persistentVolumeClaim: claimName: microservice-b
Ingress Configuration
The next YAML configures the Ingress routing. This is a fairly simple example that routes requests for /nginx to Microservice A and for /apache to Microservice B; the rewrite-target annotation strips that prefix before the request is forwarded.
YAML:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-ingress-controller
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
    - host: minikube
    - http:
        paths:
          - path: /nginx(/|$)(.*)
            backend:
              serviceName: microservice-a
              servicePort: 8282
          - path: /apache(/|$)(.*)
            backend:
              serviceName: microservice-b
              servicePort: 8282
Reverse Proxy
The last YAML we deploy is for the reverse proxy, which forwards traffic to the upstream Kubernetes Service microservice-a. This deployment is a single container running just the Signal Sciences agent.
For the agent container, note the use of SIGSCI_REVPROXY_LISTENER to configure the agent to run in reverse proxy mode. In this demo we set up an HTTP listener, but an HTTPS listener is also possible.
Listener refers to the IP and port that the agent will listen on. The recommendation is to use 0.0.0.0 so the agent binds to all available interfaces; if you use localhost or 127.0.0.1, the port will not be reachable from outside the container.
Upstreams refers to the server, or servers, that the reverse proxy forwards traffic to.
- name: SIGSCI_REVPROXY_LISTENER
  value: example:{listener=http://0.0.0.0:80,upstreams=http://microservice-a:8282}
Full YAML:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "2"
  creationTimestamp: null
  generation: 1
  labels:
    k8s-app: reverse-proxy
  name: reverse-proxy
  selfLink: /apis/extensions/v1beta1/namespaces/nginx-ingress-controller/deployments/reverse-proxy
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: reverse-proxy
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        k8s-app: reverse-proxy
      name: reverse-proxy
    spec:
      containers:
        - env:
            - name: SIGSCI_REVPROXY_LISTENER
              value: example:{listener=http://0.0.0.0:80,upstreams=http://microservice-a:8282}
            - name: SIGSCI_HOSTNAME
              value: sigscirp
            - name: SIGSCI_SECRETACCESSKEY
              value: REPLACE_ME_RP_SITE
            - name: SIGSCI_ACCESSKEYID
              value: REPLACE_ME_RP_SITE
          image: trickyhu/sigsci-agent-alpine:latest
          imagePullPolicy: Always
          name: sigsci-agent-alpine
          ports:
            - containerPort: 80
              protocol: TCP
          resources: {}
          securityContext:
            procMount: Default
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
Last Steps
Now that we have everything deployed, we can expose the various services so they are available for access.
The ingress controller and the reverse proxy are exposed externally so that we can access them remotely; they forward traffic to the microservices.
kubectl expose deployment nginx-ingress-controller --type=NodePort
kubectl expose deployment reverse-proxy --type=NodePort
The microservices are only exposed internally in Kubernetes, so the ingress controller and reverse proxy can reach them, but they are not accessible outside of the cluster.
kubectl expose deployment microservice-a
kubectl expose deployment microservice-b
Ingress, Service, and Web App Layer Deployment
You can now hit the various services through your Kubernetes environment. These simple deployment examples make it very easy to deploy Signal Sciences at the ingress, service, and app layers.
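For example, in minikube you can look up the NodePorts that were assigned and then curl the two ingress routes (NODEPORT is a placeholder; substitute the ports kubectl reports):

kubectl get services
curl http://$(minikube ip):NODEPORT/nginx
curl http://$(minikube ip):NODEPORT/apache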