Leverage PodSpec to customize the Fission runtime and builder pods

InfraCloud Team

Deploying your workload on a Kubernetes cluster using Fission involves creating a few Fission resources, for example a Fission environment and functions. If you are not familiar with the basic constructs of Fission, I would suggest you go through this article first in order to have that basic understanding. When we create a Fission environment, we can specify which runtime the functions are going to run on and how to build the provided source code to create a Fission function. We use the --image and --builder flags to specify the environment runtime and builder images respectively.

The below command, which creates a Fission environment to deploy Python code,

» fission env create --name python --image fission/python-env:latest --builder fission/python-builder:latest

would eventually create some function runtime pods in the fission-function namespace and a builder pod in the fission-builder namespace, as shown below.

» kubectl get pods -n fission-function
NAME                                            READY   STATUS    RESTARTS   AGE
poolmgr-python-default-34042-658df7c758-c9mr5   2/2     Running   0          4m55s
poolmgr-python-default-34042-658df7c758-lqq2c   2/2     Running   0          4m55s
poolmgr-python-default-34042-658df7c758-nfxqd   2/2     Running   0          4m55s

» kubectl get pods -n fission-builder
NAME                           READY   STATUS    RESTARTS   AGE
python-34042-54c7ccdc9-vqwnd   2/2     Running   0          5m10s

The pods in the fission-function namespace are the ones that actually serve requests to your functions, and the pod in the fission-builder namespace helps build the source code along with its dependencies if there is a need.
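
Once the environment exists, functions are created on top of it and served by those runtime pods. A minimal sketch (the function name hello and the file hello.py are just placeholders for illustration) could look like this:

» fission fn create --name hello --env python --code hello.py
» fission fn test --name hello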

Now, the pods that get created in fission-function and fission-builder after running that env create command run with some default settings, or spec we could say. For example, the pods in the fission-function namespace have these tolerations by default:

tolerations:
- effect: NoExecute
  key: node.kubernetes.io/not-ready
  operator: Exists
  tolerationSeconds: 300
- effect: NoExecute
  key: node.kubernetes.io/unreachable
  operator: Exists
  tolerationSeconds: 300

This simply means that the pods tolerate the node being tainted with node.kubernetes.io/not-ready or node.kubernetes.io/unreachable.
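
If you want to verify this on your own cluster, one quick way (using one of the runtime pod names listed earlier as an example) is to print the tolerations field of the pod directly:

» kubectl get pod -n fission-function poolmgr-python-default-34042-658df7c758-c9mr5 -o jsonpath='{.spec.tolerations}'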

This is where we can leverage Fission podspec to customize the spec of the pods that get created in the fission-function and fission-builder namespaces.

Let's take the example of a Kubernetes cluster set up using kind with a single node. Say we don't want to schedule arbitrary pods on this node: we can taint it, and after that, pods will be scheduled on it if and only if they have a toleration for the taint set on the node. Let's take a look at the node that we have in the cluster, and then taint it to show how the pods will not get scheduled on it.
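
In case you want to follow along, a single-node cluster like this can be created with kind; the default configuration gives you a single control-plane node:

» kind create cluster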

» kubectl get nodes
NAME                 STATUS   ROLES    AGE   VERSION
kind-control-plane   Ready    master   8h    v1.15.3

If we describe the node, we don't see any taint set on it, which is why all the pods were scheduled on this node correctly when we created the Fission environment. Let's taint the node and create the environment once again to check whether the pods get scheduled on this node or not. Run the below command to taint the node:

» kubectl taint node kind-control-plane com.ic.priority=high:NoSchedule
node/kind-control-plane tainted

Now if we describe the node once again, we can see that it has been tainted with the provided key, value, and effect, like below:

Name:               kind-control-plane
Roles:              master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=kind-control-plane
                    kubernetes.io/os=linux
                    node-role.kubernetes.io/master=
...
Taints:             com.ic.priority=high:NoSchedule
Unschedulable:      false
...
...

This means that the Kubernetes scheduler will not be able to schedule pods on this node if they don't have a toleration for the specific taint that has been set. Now let's delete the Fission environment that we created earlier and create it again to check whether the pods get scheduled on the node.

» fission env list
NAME   IMAGE                     BUILDER_IMAGE                 POOLSIZE MINCPU MAXCPU MINMEMORY MAXMEMORY EXTNET GRACETIME
python fission/python-env:latest fission/python-builder:latest 3        0      0      0         0         false  0

» fission env delete --name python
environment 'python' deleted

» fission env create --name python --image fission/python-env:latest --builder fission/python-builder:latest
environment 'python' created

Now, to check whether the pods have been scheduled on the node successfully, let's list all the pods from the fission-function and fission-builder namespaces.

» kubectl get pods -n fission-function
NAME                                            READY   STATUS        RESTARTS   AGE
poolmgr-python-default-37512-76b7fbd65f-jpfm5   0/2     Pending       0          2m5s
poolmgr-python-default-37512-76b7fbd65f-snbr4   0/2     Pending       0          2m5s
poolmgr-python-default-37512-76b7fbd65f-w6r2s   0/2     Pending       0          2m5s

» kubectl get pods -n fission-builder
NAME                           READY   STATUS    RESTARTS   AGE
python-37512-bfbb8f9f6-stzkp   0/2     Pending   0          2m15s

As you can see, the pods have not been scheduled on the node because we have tainted it, and describing a pod clearly tells us why:

» kubectl describe pods -n fission-function poolmgr-python-default-37512-76b7fbd65f-w6r2s
Name:           poolmgr-python-default-37512-76b7fbd65f-w6r2s
Namespace:      fission-function
...
...
Events:
  Type     Reason            Age                 From               Message
  ----     ------            ----                ----               -------
  Warning  FailedScheduling  79s (x4 over 4m5s)  default-scheduler  0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.

Using Fission podspec to set custom tolerations on the builder or runtime pods

In this case, we can use Fission podspec to specify the spec for the pods that get created when we create the Fission environment, and in that podspec we can specify a toleration for the taint on the node. To specify the podspec, we have to create the Fission environment using a manifest instead of the command line.

Below is an example of a Fission environment manifest that can be used to create an environment that spins up the pods with the specified podspec.

apiVersion: fission.io/v1
kind: Environment
metadata:
  name: python
  namespace: default
spec:
  builder:
    command: build
    image: fission/python-builder:latest
    podspec:
      tolerations:
      - key: "com.ic.priority"
        value: "high"
        operator: "Equal"
        effect: "NoSchedule"
  imagepullsecret: ""
  keeparchive: false
  poolsize: 3
  resources: {}
  runtime:
    image: fission/python-env:latest
    podspec:
      tolerations:
      - key: "com.ic.priority"
        value: "high"
        operator: "Equal"
        effect: "NoSchedule"
  version: 2

You can create the above spec using the --spec flag with the fission env create command; please take a look at this documentation for details on Fission spec.
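
As a rough sketch of that workflow (assuming you start from an empty spec directory), fission spec init creates the specs directory, env create with --spec writes the manifest into it instead of applying it, and fission spec apply applies everything in that directory:

» fission spec init
» fission env create --name python --image fission/python-env:latest --builder fission/python-builder:latest --spec
» fission spec apply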

If we take a closer look at the above manifest, what we are saying is that the pods created for the environment runtime and builder should have the specified podspec. If we apply the above manifest to create the environment and then describe the pods, we can see that they are created with the specified tolerations.

» kubectl create -f env.yaml
environment.fission.io/python created

» kubectl describe pods -n fission-function poolmgr-python-default-41586-585dbdbb49-r5htw
Name:           poolmgr-python-default-41586-585dbdbb49-r5htw
Namespace:      fission-function
Priority:       0
Node:           kind-control-plane/172.17.0.2
...
...
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     com.ic.priority=high:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
...
...

» kubectl describe pod -n fission-builder python-41586-6d795c5744-9jdrg
Name:           python-41586-6d795c5744-9jdrg
Namespace:      fission-builder
Priority:       0
...
...
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     com.ic.priority=high:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
...
...

As we can see in the above snippets, the pods have now been created with the modified PodSpec rather than the default one, which includes the new toleration for the taint that the node has. If we list the pods now, we can see that all of them have been scheduled on the node successfully.

» kubectl get pods -n fission-function
NAME                                            READY   STATUS    RESTARTS   AGE
poolmgr-python-default-41586-585dbdbb49-nzk4l   2/2     Running   0          6m11s
poolmgr-python-default-41586-585dbdbb49-p9djs   2/2     Running   0          6m11s
poolmgr-python-default-41586-585dbdbb49-r5htw   2/2     Running   0          6m11s

» kubectl get pods -n fission-builder
NAME                            READY   STATUS    RESTARTS   AGE
python-41586-6d795c5744-9jdrg   2/2     Running   0          6m40s

This is how we can leverage Fission podspec to specify the Kubernetes PodSpec that will be applied to the Fission function and builder pods that get created. Just like we specified tolerations for the pods, we can specify any PodSpec field we want using the Fission environment manifest.
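
For example, a minimal sketch of pinning just the runtime pods to nodes carrying a particular label (the disktype: ssd label here is only an assumption for illustration, and we assume the field is honored the same way the tolerations were) would change nothing but the runtime section of the manifest:

  runtime:
    image: fission/python-env:latest
    podspec:
      nodeSelector:
        disktype: ssd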

Using Fission podspec to attach a volume to the builder or runtime pods' containers

The other example that we are going to see in this post is how we can leverage Fission podspec to mount a volume into the runtime container of the pods that get created for the Fission environment. If we take a look at a pod that gets created as a result of creating an environment, it looks something like this:

apiVersion: v1
kind: Pod
metadata:
  ...
  name: poolmgr-python-default-45111-55c9f78877-h6wm9
  namespace: fission-function
  ...
spec:
  containers:
  - image: fission/python-env
    imagePullPolicy: IfNotPresent
    ...
    name: python
    ...
    volumeMounts:
    - mountPath: /userfunc
      name: userfunc
    - mountPath: /secrets
      name: secrets
    - mountPath: /configs
      name: configmaps
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: fission-fetcher-token-2d6kf
      readOnly: true
    ...
  volumes:
  - emptyDir: {}
    name: userfunc
  - emptyDir: {}
    name: secrets
  - emptyDir: {}
    name: configmaps
  - name: fission-fetcher-token-2d6kf
    secret:
      defaultMode: 420
      secretName: fission-fetcher-token-2d6kf
    ...

You can see that the main container of the pod (there is another one named fetcher) already has some volumes mounted in it. Our goal now is to mount an additional volume into this container at a path, let's say /var. To mount a volume, we first have to define one, and we are going to use the hostPath volume type in this example. To mount the volume into a container, we also have to specify the container name, and we get that name from the pod manifest shown above because that is the name of the container that Fission creates.

Now let's look at the below manifest of a Python environment:

apiVersion: fission.io/v1
kind: Environment
metadata:
  name: python
  namespace: default
spec:
  builder:
    command: build
    image: fission/python-builder
  imagepullsecret: ""
  keeparchive: false
  poolsize: 3
  resources: {}
  runtime:
    image: fission/python-env
    podspec:
      containers:
      - name: python
        volumeMounts:
        - name: vol
          mountPath: /var
      volumes:
      - name: vol
        hostPath:
          path: /var
  version: 2

As you can see, in the podspec it's pretty easy to specify the volume, and we are doing that using the hostPath type. The question that arises is which container we are going to mount this volume in. Since our requirement is to mount the volume in the runtime pod's python container, we can figure out from the pod manifest above that the container name is going to be python. That's why we set the container name to python, then specified the name of the volume that should be mounted and the path where it should be mounted.
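
If you are not sure about the container name in your own setup, one way to confirm it (using the runtime pod name from the manifest shown earlier as an example) is to list the container names directly:

» kubectl get pod -n fission-function poolmgr-python-default-45111-55c9f78877-h6wm9 -o jsonpath='{.spec.containers[*].name}'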

Let's go ahead and create a Fission environment using the above manifest:

» kubectl create -f python-env.yaml
environment.fission.io/python created

Now if we take a look at the container of any of the runtime pods that have been created for this environment, we can see that we were able to successfully mount another volume in the container.

apiVersion: v1
kind: Pod
metadata:
  ...
  name: poolmgr-python-default-46843-5cc5fbdb6c-lf6wq
  namespace: fission-function
  ...
spec:
  containers:
  - image: fission/python-env
    imagePullPolicy: IfNotPresent
    ...
    name: python
    ...
    volumeMounts:
    - mountPath: /userfunc
      name: userfunc
    - mountPath: /secrets
      name: secrets
    - mountPath: /configs
      name: configmaps
    - mountPath: /var
      name: vol
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: fission-fetcher-token-2d6kf
      readOnly: true
  ...
  volumes:
  - emptyDir: {}
    name: userfunc
  - emptyDir: {}
    name: secrets
  - emptyDir: {}
    name: configmaps
  - hostPath:
      path: /var
      type: ""
    name: vol
  - name: fission-fetcher-token-2d6kf
    secret:
      defaultMode: 420
      secretName: fission-fetcher-token-2d6kf
...

As you can see, we were able to mount a new volume in the python container at the path /var. To verify that things are working correctly, let's exec into this pod and create a file under /var; we should then be able to see that file in the hostPath on the node.

» kubectl exec -it -n fission-function poolmgr-python-default-46843-5cc5fbdb6c-lf6wq bash
Defaulting container name to python.
Use 'kubectl describe pod/poolmgr-python-default-46843-5cc5fbdb6c-lf6wq -n fission-function' to see all of the containers in this pod.
bash-5.0# cd /var/
bash-5.0# mkdir volume-test
bash-5.0# cd volume-test/
bash-5.0# echo 'This file contains some data, that will be persisted in hostpath' > datafile
bash-5.0# ls -l
total 4
-rw-r--r--    1 root     root            65 Jul  3 19:52 datafile
bash-5.0# pwd
/var/volume-test
bash-5.0#

So, we have created a file inside the directory /var/volume-test. Since the host's /var is mounted at the container's /var, the new directory volume-test and its content should be available on the host at the path /var/volume-test/. Since we are running the Kubernetes cluster using kind, which eventually runs everything inside a Docker container, let's figure out which container is running this cluster.

» docker ps
CONTAINER ID        IMAGE                                                 COMMAND                  CREATED             STATUS              PORTS                                  NAMES
eb16e7bc094e        kindest/node:v1.15.3                                  "/usr/local/bin/entr…"   11 hours ago        Up 11 hours         45923/tcp, 127.0.0.1:45923->6443/tcp   kind-control-plane

Now, let's exec into this Docker container to check whether we have the /var/volume-test/datafile that contains the text that we have written.

» docker exec -it eb16e7bc094e bash
root@kind-control-plane:/# cd /var/volume-test/
root@kind-control-plane:/var/volume-test# ls -l
total 4
-rw-r--r-- 1 root root 65 Jul  3 19:52 datafile
root@kind-control-plane:/var/volume-test# cat datafile
This file contains some data, that will be persisted in hostpath
root@kind-control-plane:/var/volume-test#

And there we go: we can see the content in the host path.

Caveat:

As of now, we cannot specify the ServiceAccount of the pods using Fission podspec in the environment manifest. There is an ongoing discussion about whether to support that, and if agreed upon, the support would be merged soon.

Looking for help with implementing serverless? Learn why so many startups & enterprises consider us one of the best serverless consulting & services providers.
