On-Prem Kubernetes Configuration | ObjectSecurity OT.AI Platform

Modified on Mon, 25 Sep 2023 at 01:46 PM

This support article provides the requirements for installing an on-premises, air-gapped deployment of the ObjectSecurity OT.AI Platform on a Kubernetes cluster. We will also cover the required system specifications to host the deployment, configuration requirements, and an installation process overview. 


The ObjectSecurity OT.AI Platform uses Kubernetes to increase binary vulnerability analysis speed and scalability. This is accomplished by parallelizing the computation required for binary assessment across multiple nodes in a Kubernetes cluster.


Note: If you need assistance at any point during this process, please submit a support ticket under Onboarding Support.


Technical Requirements:

To facilitate a successful on-prem ObjectSecurity OT.AI Platform Kubernetes deployment, the following system specifications are recommended:

  • Each node in the Kubernetes cluster must have a minimum of 4GB of RAM (8-16GB recommended), and at least one node must have a minimum of 6GB.
  • At least one node in the Kubernetes cluster must have a minimum of 80GB of available storage.
  • All remaining nodes in the Kubernetes cluster must have a minimum of 20GB of available storage.
  • Each node in the Kubernetes cluster must have a minimum of 2 CPU cores (4 cores recommended).
  • The Kubernetes cluster must contain a minimum of 3 nodes.
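Each candidate machine can be checked against these minimums before provisioning. The following is a minimal sketch for Linux hosts (the threshold values mirror the per-node minimums above; raise them for the node that needs 6GB of RAM or 80GB of storage):

```shell
# Pre-flight check for a single candidate node (Linux).
# Thresholds reflect the per-node minimums listed above.
min_cpus=2
min_ram_gb=4
min_disk_gb=20

cpus=$(nproc)
ram_gb=$(awk '/MemTotal/ {print int($2 / 1024 / 1024)}' /proc/meminfo)
disk_gb=$(df -BG --output=avail / | tail -n 1 | tr -dc '0-9')

echo "CPU cores: $cpus (minimum $min_cpus)"
echo "RAM:       ${ram_gb}GB (minimum $min_ram_gb)"
echo "Free disk: ${disk_gb}GB (minimum $min_disk_gb)"

if [ "$cpus" -ge "$min_cpus" ] && [ "$ram_gb" -ge "$min_ram_gb" ] \
   && [ "$disk_gb" -ge "$min_disk_gb" ]; then
    echo "node meets minimum specs"
else
    echo "node is below minimum specs"
fi
```

This checks only the root filesystem; if your container runtime stores data on a different volume, check that mount instead.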


Preparing for an ObjectSecurity OT.AI Platform Kubernetes Cluster Deployment:

Estimated time to completion: two hours


This preparation guide assumes you have multiple virtual machines and bare-metal machines on the same network that the Kubernetes cluster can use. 


Preferably, the sole purpose of these machines should be to act as nodes in the ObjectSecurity OT.AI Platform Kubernetes cluster (i.e., the nodes should not be used for other computations). If this is not the case (e.g., you have a pre-configured Kubernetes cluster running pre-existing applications for your organization), then the minimum technical requirements must be increased accordingly. For example, suppose your pre-existing Kubernetes applications consume approximately 4GB of RAM on every node in the cluster. Each node must then have at least 8GB of RAM: 4GB for the pre-existing applications plus the 4GB minimum required by the ObjectSecurity OT.AI Platform.

Additionally, this guide assumes that a one-time Internet connection can be performed to download the ObjectSecurity OT.AI Platform installer files and container images necessary from the ObjectSecurity Secure SFTP Portal or to run the ObjectSecurity OT.AI Platform Kubernetes deployment. If this is not the case or raises any other concerns, please let the ObjectSecurity Sales and Support Team know. Alternatively, we can send you the installer files and container images that are approved for your air-gapped lab via flash drive, disk, or another offline method.


Cluster Setup and Configuration:

It is preferred that a net-new Kubernetes cluster be configured using MicroK8s (https://microk8s.io/), as MicroK8s is the primary Kubernetes distribution supported by the ObjectSecurity OT.AI Platform. Other Kubernetes distributions (e.g., Rancher, K3s, etc.) are also supported.


Instructions for installing and starting with MicroK8s can be found here: https://microk8s.io/docs/getting-started. It is important that the add-ons mentioned in Step 6 of the article be enabled:

$ microk8s enable dns
$ microk8s enable hostpath-storage


Instructions for using MicroK8s to create a multi-node cluster (i.e., adding nodes) can be found here: https://microk8s.io/docs/clustering. Once complete, a Kubernetes cluster containing at least three nodes that meet the required system specs should be running.
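As a quick illustration of the clustering flow described in that guide: run microk8s add-node on the existing node, then run the join command it prints on each new node. The IP address and token below are placeholders for illustration only, not real values:

```shell
# On the existing (first) node, `microk8s add-node` prints a join
# command similar to the one stored below (placeholder values only).
join_cmd="microk8s join 10.0.0.1:25000/92b2db237428470dc4fcfc4ebbd9dc81/2c0cb3284b05"

# Sanity-check the <host>:<port>/<token> shape before running the
# command as root on the joining node.
echo "$join_cmd" | grep -Eq '^microk8s join [0-9.]+:[0-9]+/[0-9a-f/]+$' \
  && echo "join command looks well-formed"

# After all nodes have joined, verify from any node with:
#   microk8s kubectl get nodes
```

Each join command is single-use; run add-node once per node you want to add.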


Additional documentation for configuring and managing a production Kubernetes cluster can be found here: https://kubernetes.io/docs/setup/production-environment/.


If you are using an insecure Docker image registry (e.g., a registry exposed over HTTP instead of HTTPS), you must configure MicroK8s to be aware of the insecure registry. This can be done by following the instructions here: https://microk8s.io/docs/registry-private. Other Kubernetes distributions may have different requirements for working with insecure registries, so please refer to your Kubernetes distribution’s documentation for details.


Kubectl

A Kubernetes cluster is managed using a command-line tool called Kubectl. Kubectl is used to create, destroy, and update Kubernetes resources. MicroK8s comes with Kubectl pre-installed and can be aliased using the following command (which may be placed in ~/.bash_aliases):

$ alias kubectl='microk8s kubectl'

Alternatively, basic installation instructions for Kubectl can be found here: https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/. Documentation for using Kubectl can be found here: https://kubernetes.io/docs/reference/kubectl/.

Helm

Helm (https://helm.sh/) is a package manager for Kubernetes that installs Kubernetes applications using a packaging format called charts. The ObjectSecurity OT.AI Platform is installed as a Helm chart, so it is necessary to install Helm. Installation instructions can be accessed here: https://helm.sh/docs/intro/quickstart/.


Helm uses kubectl to run commands when managing charts. It uses the kubectl config found at $HOME/.kube/config. By default, microk8s does not store its configuration here. This can be remediated with the following command:

$ microk8s.kubectl config view --raw > $HOME/.kube/config


Container Image Registry

A Docker container image registry hosted in (or accessible to) the same network as the Kubernetes cluster is required. This registry can be hosted either inside or outside of the cluster itself. This registry stores the images required to run the ObjectSecurity OT.AI Platform. Instructions for creating and deploying a registry can be found here: https://docs.docker.com/registry/deploying/


Additionally, it is recommended that you protect this registry with a username and password. Instructions to do so can be found in the same article: https://docs.docker.com/registry/deploying/#restricting-access.
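For reference, the registry deployment described in those Docker docs boils down to roughly the following. The testuser/testpassword credentials are placeholders; substitute your own:

```shell
# Create an htpasswd file with a single user (placeholder credentials).
mkdir -p auth
docker run --rm --entrypoint htpasswd httpd:2 -Bbn testuser testpassword > auth/htpasswd

# Run a registry on port 5000, protected by basic auth.
docker run -d -p 5000:5000 --restart=always --name registry \
  -v "$(pwd)/auth:/auth" \
  -e "REGISTRY_AUTH=htpasswd" \
  -e "REGISTRY_AUTH_HTPASSWD_REALM=Registry Realm" \
  -e "REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd" \
  registry:2

# Clients must authenticate before pushing or pulling:
docker login localhost:5000
```

If the registry is exposed over plain HTTP, remember to also apply the insecure-registry configuration from the previous section.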

Network Filesystem Dependencies

The ObjectSecurity OT.AI Platform Kubernetes deployment utilizes a Network File System (NFS) to store binary assessment result data. This NFS does not have to be configured manually and will be installed as part of the Helm chart. However, each node in the Kubernetes cluster needs specific dependencies configured to host/communicate with this NFS. These dependencies can be installed and enabled by running the following commands on each node in the cluster.

$ sudo apt-get install nfs-kernel-server
$ sudo modprobe nfs
$ sudo modprobe nfsd

NOTE: The commands above are for Debian-based systems. Other operating systems may require slightly different commands.
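Note that modules loaded with modprobe do not persist across reboots. On systemd-based distributions, one way to load them at boot is a modules-load.d fragment; this sketch stages the file locally first (the otai-nfs.conf filename is our own convention, not mandated):

```shell
# Files under /etc/modules-load.d/ list one kernel module name per
# line and are loaded at boot on systemd systems. Stage the file:
printf 'nfs\nnfsd\n' > otai-nfs.conf
cat otai-nfs.conf

# Then install it on each node:
#   sudo cp otai-nfs.conf /etc/modules-load.d/otai-nfs.conf
```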

Installation of the ObjectSecurity OT.AI Platform Helm Chart

Once the Kubernetes cluster has been configured according to the previous instructions, the ObjectSecurity OT.AI Platform may be installed according to the following steps:


1.    Download the following files from the ObjectSecurity Secure SFTP portal:

a.    otai.tgz: The ObjectSecurity OT.AI Platform Helm Chart, exported as an archive file.

b.    otai_install_images.sh: A shell script used to pull the docker images required by the ObjectSecurity OT.AI Platform and push them to a local docker registry.

c.    otai_license: A license used to authenticate the ObjectSecurity OT.AI Platform deployment.

2.    Run the otai_install_images.sh script to push the required docker images to the local image registry. To save storage space, the docker images may then be deleted from the host’s set of local docker images (after the push, they are stored redundantly in both the host’s local docker cache and your local image registry):

$ # removes a single image
$ docker rmi -f <image>
$ # removes all images
$ docker rmi $(docker images -a -q)

3.    One of the nodes of the cluster will be used to store all binary assessment data: this node will run the otai-nfs-server pod. You must specify the node you want to host this pod by labeling it:

$ kubectl label nodes <your-node-name> otai-nfs=true

4.    Install the Helm Chart using helm install. The installation includes the following optional parameters, each of which should be changed depending on the installation environment:

a.    registry: The URL and port of your local docker image registry.

b.    registryUsername: The username used to access your local docker image registry.

c.    registryPassword: The password used to access your local docker image registry.

d.    memLimit [16]: The amount of memory (in Gi) you wish to make available to the ObjectSecurity OT.AI Platform deployment. Increasing this number will enable more assessments to run in parallel, dramatically increasing the speed of all analyses. The otai-operator pod will consume 4Gi of memory on one of the nodes, so the value of memLimit should be set with this in mind. For example, if you have a Kubernetes cluster with 20 Gi of total memory, you must subtract the 4Gi of memory required by the otai-operator and set memLimit to 16Gi.

e.    hostPath [/var/nfs]: The file path on a host node wherein data will be stored.

f.    nfsIP [10.96.5.74]: The static internal ClusterIP of the otai-nfs-service. This value may need to be changed if the default internal ClusterIP (10.96.5.74) is already allocated or otherwise unallocatable.

When running Helm install, these optional parameters can be set using the --set <label>=<value> argument. An example follows:

$ helm install --set registry=registry:5000 --set registryUsername=admin --set registryPassword=password --set memLimit=16 otai otai.tgz

This example installs the ObjectSecurity OT.AI Platform Kubernetes deployment using a container image registry found on the network at registry:5000, protected with the username admin and the password password. Due to the memLimit argument, the deployment uses at most 16GB of RAM across the whole cluster.
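To see how a memLimit value falls out of the cluster's resources, suppose a hypothetical three-node cluster with 8GB, 8GB, and 4GB of RAM (20Gi total). Subtracting the 4Gi reserved for the otai-operator:

```shell
# Derive memLimit from hypothetical per-node memory sizes (in Gi).
node_mem="8 8 4"     # example values, not a requirement
operator_mem=4       # fixed reservation for the otai-operator pod

total=0
for m in $node_mem; do total=$((total + m)); done
mem_limit=$((total - operator_mem))

echo "total cluster memory: ${total}Gi"
echo "memLimit to pass:     ${mem_limit}Gi"
# With these numbers, you would pass --set memLimit=16 to helm install.
```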

5.    Use your license file (otai_license) to validate the deployment by logging into the platform as the SUPERADMIN and uploading the license file on the Settings Page. The settings page can be found at http://localhost:30004/#/settings once the deployment has been installed. Without a valid license, the OT.AI Platform will refuse to start any assessments.


Installed Kubernetes Objects:

A successful installation of the ObjectSecurity OT.AI Platform Kubernetes deployment will install the following Kubernetes Objects (https://kubernetes.io/docs/concepts/overview/working-with-objects/):

  • otai Namespace: This namespace incorporates all the following Kubernetes objects together so that they may be cleanly separated from other applications running on pre-existing Kubernetes clusters.
  • otai-registry-secret Secret: This secret encodes the local container image registry’s username and password such that images may be pulled from it at runtime.
  • otai-nfs-server Deployment: This deployment hosts a Network File System used by other Deployments/Pods running in the cluster.
  • otai-nfs-service Service: This service exposes the otai-nfs-server IP address for use inside the cluster.
  • otai-operator Deployment: This deployment performs the bulk of the ObjectSecurity OT.AI Platform’s assessment orchestration, authentication, extraction, and input handling. It spawns other assessment containers at runtime to perform binary analyses.
  • otai-operator Service: This service exposes the user interface of the otai-operator outside of the cluster as a NodePort (i.e., port 30004).
  • otai-operator-service-account ServiceAccount, otai-operator-role Role, and otai-operator-rolebinding RoleBinding: These objects are used to give the otai-operator deployment permissions to run binary assessment pods within the cluster.
  • otai-pv Persistent Volume, otai-pvc Persistent Volume Claim, otai-nfs Storage Class: These objects are used to wrap the Network File System such that other Deployment/Pods may perform read/writes to/from it.


At runtime, ephemeral Pods are spawned and despawned within the cluster by the otai-operator while performing binary analyses.

Uninstallation

To uninstall the ObjectSecurity OT.AI Platform from the Kubernetes cluster, the following command may be used:

$ helm uninstall otai



