Unlocking the Power of Kubernetes Operators for Network Automation [Part 1]

As Kubernetes becomes the go-to platform for modern infrastructure, it brings forth a robust ecosystem, GitOps support, and an inherent declarative nature. It’s not just a container orchestration platform; it’s a comprehensive data center solution, including network orchestration elements. Kubernetes nodes can even function as their own BGP Autonomous Systems, ushering in new possibilities for network automation.

Think about Nephio, a cool project under the Linux Foundation. It builds on Kubernetes and lets you automate your network based on intent: you declare how you want your network to behave, and Nephio makes it happen. It’s like telling your network what you want and letting it handle the rest.

In this blog post, we’ll dive into the world of Kubernetes operators for network management. We’ll show how to use kubebuilder in a quick tutorial to build the foundation of your first operator.

Join us on this journey as we explore how Kubernetes operators can make your network management simpler, with easy-to-understand examples that show you how it all works.

Kubernetes Operators

Kubernetes achieves automation through YAML-defined desired states for resources like ConfigMaps and services. The closed-loop concept continuously reconciles actual states with desired states, ensuring self-correcting infrastructure.
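The closed loop can be sketched in a few lines of Go. This is a toy illustration, not Kubernetes code: a controller repeatedly compares actual state with desired state and applies a small correction until the two converge.

```go
package main

import "fmt"

// state is a toy stand-in for a resource's observed or desired condition.
type state struct {
	replicas int
}

// reconcile moves the actual state one step toward the desired state.
func reconcile(actual, desired state) state {
	if actual.replicas < desired.replicas {
		actual.replicas++ // scale up one replica at a time
	} else if actual.replicas > desired.replicas {
		actual.replicas-- // scale down one replica at a time
	}
	return actual
}

func main() {
	actual, desired := state{replicas: 0}, state{replicas: 3}
	// The control loop runs until actual matches desired (self-correcting).
	for actual != desired {
		actual = reconcile(actual, desired)
		fmt.Println("reconciled to", actual.replicas)
	}
}
```

Real controllers are event-driven rather than a busy loop, but the principle is the same: observe, compare, correct.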

Kubernetes Operators are essential tools for managing complex or stateful applications within Kubernetes clusters. They extend Kubernetes’ capabilities by automating the deployment, scaling, and maintenance of these applications. Operators are particularly valuable for intricate workloads like databases, messaging systems, and stateful services, which demand more than the standard Kubernetes resources offer.

Operators employ Custom Resource Definitions (CRDs) to define and manage application-specific resources. CRDs enable users to create custom objects that represent their applications and their desired states. Operators then continuously monitor these CRDs, reconciling the actual state with the desired state defined in the CRDs. This self-driving, declarative approach streamlines the management of complex applications, ensuring they operate reliably and efficiently in Kubernetes environments, ultimately simplifying the management of complex, stateful apps.

Kubernetes Operators can automate network configuration, scaling, and maintenance for network automation apps. They use Custom Resource Definitions (CRDs) to define network policies, ensuring consistent, dynamic network management within the Kubernetes ecosystem.

Network automation apps, tackling challenges like L3VPNs, traffic engineering, and EVPN tenants, benefit from Kubernetes Operators. These automate resource management while adhering to GitOps practices for a consistent source of truth, streamlining operations and enhancing network reliability in a dynamic environment.
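For instance, an operator could expose an L3VPN as a custom resource. The sketch below is purely hypothetical; the group, kind, and every field name are invented for illustration, not a real API:

```yaml
apiVersion: network.example.com/v1alpha1
kind: L3VPN
metadata:
  name: customer-a-vpn
spec:
  routeDistinguisher: "65000:100"
  routeTargets:
    - "65000:100"
  sites:
    - router: pe1
      interface: GigabitEthernet0/0/1
```

A user would only declare the VPN's intent like this; the operator's controller would translate it into device or controller configuration and keep it reconciled.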

Kubebuilder

Kubebuilder is a powerful development framework for building Kubernetes operators. It simplifies the creation of custom resources and controllers, streamlining the process of managing complex applications within Kubernetes.

Quick Start

To get started, you’ll need a Kubernetes cluster; I recommend using Kubernetes with Kind. A single-node cluster installation is sufficient and takes just a few minutes. Ensure you have Go 1.20+ installed (if you’re upgrading, remove the existing /usr/local/go folder first for a clean install) and then proceed to install Kubebuilder.

Official quick start documentation: https://go.kubebuilder.io/quick-start

In summary, run the following to install Go (I used the 1.21.4 release, tested on Rocky Linux 9):

curl -LO https://golang.org/dl/go1.21.4.linux-amd64.tar.gz
tar -C /usr/local -xzf go1.21.4.linux-amd64.tar.gz
echo 'export PATH=$PATH:/usr/local/go/bin' >> ~/.bashrc
source ~/.bashrc
rm -f go1.21.4.linux-amd64.tar.gz
go version

Then kubebuilder:

curl -L -o kubebuilder "https://go.kubebuilder.io/dl/latest/$(go env GOOS)/$(go env GOARCH)"
chmod +x kubebuilder && mv kubebuilder /usr/local/bin/

Then, install kind and start your first cluster (you’ll need Docker installed; in my case, I am using 24.0.6):

curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.20.0/kind-linux-amd64
chmod +x ./kind && mv ./kind /usr/local/bin/kind
kind create cluster

Kind will set up your kubeconfig automatically.

Now, install kubectl

curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl

And finally `make`

dnf -y update
dnf -y install make

And we are good to go

You have to create a folder for all the files you will create. Then, use the following command (you can always delete everything inside the folder and start over, so don’t worry too much for now about the details of every option in the command):

mkdir test01
cd test01
kubebuilder init --owner "Mau" --domain cloud-native-everything.com --repo github.com/cloud-native-everything/nsp-kube-operator

The --domain, --owner, and --repo values don’t need to be anything official, at least for this test. The command creates a bunch of files and folders with this structure:

cmd
config 
Dockerfile 
go.mod 
go.sum
hack
Makefile
PROJECT
README.md

There’s no need to run go mod init; the module is already set up. You can now create your APIs:

kubebuilder create api --group app --version v1alpha1 --kind App
INFO Create Resource [y/n]                        
y
INFO Create Controller [y/n]                      
y

Now you’re ready to customize your operator (see the next section for that). Once you have your changes, you can generate the manifests, install them (this creates the CRD in the cluster via your default kubeconfig; check in advance that you have access to your cluster with kubectl), and finally run the controller:

make manifests
make install
make run

You also have the option to build container images of your operator and run it directly in your cluster. However, I won’t cover that in this part of the tutorial. If you still want to try, check the official doc: https://go.kubebuilder.io/quick-start. You can use make for building your container images too.

Exploring Kubebuilder files

After you’ve created the APIs, you’ll see some files that will help you create your first operator. Let’s start with api/v1alpha1/app_types.go.

There’s a section in this file like this:

type AppSpec struct {
	// INSERT ADDITIONAL SPEC FIELDS - desired state of cluster
	// Important: Run "make" to regenerate code after modifying this file

	// Foo is an example field of App. Edit app_types.go to remove/update
	Foo string `json:"foo,omitempty"`
}

// AppStatus defines the observed state of App
type AppStatus struct {
	// INSERT ADDITIONAL STATUS FIELD - define observed state of cluster
	// Important: Run "make" to regenerate code after modifying this file
}

You can modify it and start adding your own fields, like this:

type AppSpec struct {
	// INSERT ADDITIONAL SPEC FIELDS - desired state of cluster
	// Important: Run "make" to regenerate code after modifying this file

	// Foo is an example field of App. Edit app_types.go to remove/update
	Name string `json:"name,omitempty"`
	Id string `json:"id,omitempty"`
}

// AppStatus defines the observed state of App
type AppStatus struct {
	// INSERT ADDITIONAL STATUS FIELD - define observed state of cluster
	// Important: Run "make" to regenerate code after modifying this file
	Name string `json:"name,omitempty"`
}
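As a side note, controller-gen also understands validation markers in these comments, which end up as OpenAPI constraints in the generated CRD. A hedged sketch (the marker names come from kubebuilder’s validation markers; the constraints themselves are invented for illustration):

```go
type AppSpec struct {
	// Name of the application, limited to 32 characters
	// +kubebuilder:validation:MaxLength=32
	Name string `json:"name,omitempty"`

	// Id must be alphanumeric
	// +kubebuilder:validation:Pattern=`^[0-9A-Za-z]+$`
	Id string `json:"id,omitempty"`
}
```

With markers like these, the API server rejects non-conforming custom resources before your controller ever sees them.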

Now, you can execute make manifests.

Then a new file with the structure of your CRD will be generated under config/crd/bases:

---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  annotations:
    controller-gen.kubebuilder.io/version: v0.13.0
  name: apps.app.cloud-native-everything.com
spec:
  group: app.cloud-native-everything.com
  names:
    kind: App
    listKind: AppList
    plural: apps
    singular: app
  scope: Namespaced
  versions:
  - name: v1alpha1
    schema:
      openAPIV3Schema:
        description: App is the Schema for the apps API
        properties:
          apiVersion:
            description: 'APIVersion defines the versioned schema of this representation
              of an object. Servers should convert recognized schemas to the latest
              internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
            type: string
          kind:
            description: 'Kind is a string value representing the REST resource this
              object represents. Servers may infer this from the endpoint the client
              submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
            type: string
          metadata:
            type: object
          spec:
            description: AppSpec defines the desired state of App
            properties:
              id:
                type: string
              name:
                description: Foo is an example field of App. Edit app_types.go to
                  remove/update
                type: string
            type: object
          status:
            description: AppStatus defines the observed state of App
            properties:
              name:
                description: 'INSERT ADDITIONAL STATUS FIELD - define observed state
                  of cluster Important: Run "make" to regenerate code after modifying
                  this file'
                type: string
            type: object
        type: object
    served: true
    storage: true
    subresources:
      status: {}

Also, you can go and modify your controller at internal/controller/app_controller.go:

func (r *AppReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	_ = log.FromContext(ctx)

	// TODO(user): your logic here

	return ctrl.Result{}, nil
}

You can edit it like this (to log the incoming reconcile request):

func (r *AppReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	l := log.FromContext(ctx)

	l.Info("Inside Reconcile function", "req", req)

	return ctrl.Result{}, nil
}
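To actually read the spec values of your custom resource, a common next step is to fetch the object inside Reconcile. This is a sketch, assuming the generated api/v1alpha1 package is imported as appv1alpha1 and the controller-runtime client package is imported as client (both are part of the kubebuilder scaffold):

```go
func (r *AppReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	l := log.FromContext(ctx)

	// Fetch the App instance named in the request
	var app appv1alpha1.App
	if err := r.Get(ctx, req.NamespacedName, &app); err != nil {
		// The object may have been deleted after the event was queued;
		// in that case there is nothing to do.
		return ctrl.Result{}, client.IgnoreNotFound(err)
	}

	l.Info("Reconciling App", "spec.name", app.Spec.Name, "spec.id", app.Spec.Id)

	return ctrl.Result{}, nil
}
```

This only reads and logs the spec; real reconciliation logic would compare it against the observed state and act on any difference.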

And finally, create an example custom resource using the sample at config/samples/app_v1alpha1_app.yaml, modifying it to:

apiVersion: app.cloud-native-everything.com/v1alpha1
kind: App
metadata:
  labels:
    app.kubernetes.io/name: app
    app.kubernetes.io/instance: app-sample
    app.kubernetes.io/part-of: nsp-kube-operator
    app.kubernetes.io/managed-by: kustomize
    app.kubernetes.io/created-by: nsp-kube-operator
  name: app-sample
spec:
  name: test
  id: 10101AB

Now, let’s install the CRD with make install (run make manifests and make generate first if you have pending changes). You should now see the CRD installed in the k8s cluster:

[pinrojas@server nsp-kube-operator]$ kubectl get crd
NAME                                   CREATED AT
apps.app.cloud-native-everything.com   2023-11-10T22:37:42Z
[pinrojas@server nsp-kube-operator]$ kubectl get apps.app.cloud-native-everything.com
No resources found in default namespace.

But you shouldn’t see any objects created yet.

Create your sample now:

[pinrojas@server nsp-kube-operator]$ sudo kubectl apply -f config/samples/app_v1alpha1_app.yaml 
app.app.cloud-native-everything.com/app-sample created
[pinrojas@server nsp-kube-operator]$ sudo kubectl get apps.app.cloud-native-everything.com
NAME         AGE
app-sample   10s
[pinrojas@server nsp-kube-operator]$ sudo kubectl describe apps.app.cloud-native-everything.com app-sample
Name:         app-sample
Namespace:    default
Labels:       app.kubernetes.io/created-by=nsp-kube-operator
              app.kubernetes.io/instance=app-sample
              app.kubernetes.io/managed-by=kustomize
              app.kubernetes.io/name=app
              app.kubernetes.io/part-of=nsp-kube-operator
Annotations:  <none>
API Version:  app.cloud-native-everything.com/v1alpha1
Kind:         App
Metadata:
  Creation Timestamp:  2023-11-10T22:42:15Z
  Generation:          1
  Resource Version:    224404
  UID:                 bea4a537-17df-41dd-ab5c-5c8fa8114b27
Spec:
  Id:    10101AB
  Name:  test
Events:  <none>

Now, Let’s check our controller using make run

[pinrojas@server nsp-kube-operator]$ sudo make run
test -s /home/pinrojas/nsp-kube-operator/bin/controller-gen && /home/pinrojas/nsp-kube-operator/bin/controller-gen --version | grep -q v0.13.0 || \
GOBIN=/home/pinrojas/nsp-kube-operator/bin go install sigs.k8s.io/controller-tools/cmd/controller-gen@v0.13.0
/home/pinrojas/nsp-kube-operator/bin/controller-gen rbac:roleName=manager-role crd webhook paths="./..." output:crd:artifacts:config=config/crd/bases
/home/pinrojas/nsp-kube-operator/bin/controller-gen object:headerFile="hack/boilerplate.go.txt" paths="./..."
go fmt ./...
api/v1alpha1/app_types.go
go vet ./...
go run ./cmd/main.go
2023-11-10T16:44:45-06:00       INFO    setup   starting manager
2023-11-10T16:44:45-06:00       INFO    controller-runtime.metrics      Starting metrics server
2023-11-10T16:44:45-06:00       INFO    starting server {"kind": "health probe", "addr": "[::]:8081"}
2023-11-10T16:44:45-06:00       INFO    controller-runtime.metrics      Serving metrics server  {"bindAddress": ":8080", "secure": false}
2023-11-10T16:44:45-06:00       INFO    Starting EventSource    {"controller": "app", "controllerGroup": "app.cloud-native-everything.com", "controllerKind": "App", "source": "kind source: *v1alpha1.App"}
2023-11-10T16:44:45-06:00       INFO    Starting Controller     {"controller": "app", "controllerGroup": "app.cloud-native-everything.com", "controllerKind": "App"}
2023-11-10T16:44:45-06:00       INFO    Starting workers        {"controller": "app", "controllerGroup": "app.cloud-native-everything.com", "controllerKind": "App", "worker count": 1}
2023-11-10T16:44:45-06:00       INFO    Inside Reconcile function       {"controller": "app", "controllerGroup": "app.cloud-native-everything.com", "controllerKind": "App", "App": {"name":"app-sample","namespace":"default"}, "namespace": "default", "name": "app-sample", "reconcileID": "506512da-30b2-4f74-b4ef-0e35c8861453", "req": {"name":"app-sample","namespace":"default"}}

In the last lines, you can see that our controller performed the reconciliation, and it will be triggered every time the custom resource changes (for example, when you change the spec in the sample YAML file). We will do a more advanced test later. Stay tuned.

See ya!
