Run K8s Simply: Single-Node KubeSphere + OpenELB

K8s on one node! Explore deploying a Kubernetes cluster with KubeSphere and OpenELB on a single machine. Ideal for development, testing, or lightweight workloads.


KubeSphere is a distributed operating system for cloud-native applications built on Kubernetes. It is completely open source, supports multi-cloud and multi-cluster management, provides full-stack IT automated operations capabilities, and simplifies enterprise DevOps workflows. In production it can be deployed on managed clouds such as Google GKE, Azure AKS, AWS EKS, DigitalOcean, Linode, and Oracle OKE, or on Chinese managed clouds such as Alibaba Cloud ACK, Tencent Cloud TKE, and Huawei Cloud CCE.

A very convenient advantage of a managed cloud is that you can use the cloud's load balancer directly to expose services. If instead you build a Linux-based K8s (Kubernetes) cluster yourself, you have to solve service exposure on your own. With K3s, Rancher's distribution, you don't need to think about this: the bundled Traefik covers it out of the box. You can of course also use another load balancer, but that is beyond the scope of this article.

The focus here remains KubeSphere's K8s. Let's look at how a K8s cluster deployed on Linux can expose services. OpenELB is used here; its predecessor was PorterLB, and OpenELB is certainly easier to remember.

First, follow KubeSphere's installation instructions to complete the installation. This article uses the All-in-One (single-node) method, on a Hetzner AX51-NVMe server.

There are currently two ways to install OpenELB: directly with kubectl, or with Helm. Here kubectl is used, which I personally find more convenient and easier to understand and maintain.


Install OpenELB using kubectl

1. First, SSH into the server where the Kubernetes cluster runs. After logging in, execute the following command:

kubectl apply -f https://raw.githubusercontent.com/openelb/openelb/master/deploy/openelb.yaml

2. Then run the following command to check the status of openelb-manager. When READY shows 1/1 and STATUS shows Running, OpenELB has been installed successfully.

kubectl get po -n openelb-system

After executing the second step, you should see the following output:

   openelb-admission-create-cjdg5      0/1     Completed   0          41s
   openelb-admission-patch-2f2sd       0/1     Completed   0          41s
   openelb-keepalive-vip-42z4h         1/1     Running     0          41s
   openelb-manager-774f8cc4dc-s2gbl    1/1     Running     0          41s

Use kubectl to remove OpenELB

If you want to delete OpenELB, first SSH into the server where the Kubernetes cluster runs. After logging in, execute the following command:

kubectl delete -f https://raw.githubusercontent.com/openelb/openelb/master/deploy/openelb.yaml

Please note!

Before deleting OpenELB, make sure there are no Services left under the openelb-system namespace. In particular, if you have configured an EIP (covered later), delete the Eip object first and then delete OpenELB; otherwise you will hit a bug where the Eip cannot be deleted.

After executing the delete command, run the following command to check whether the openelb-system namespace still exists. If it is not found, OpenELB has been deleted.

kubectl get ns


In the latest version, 0.5.0, OpenELB supports three modes: BGP mode, Layer2 mode, and VIP mode. BGP mode is suitable for users who can manage their routers; the routers must support BGP and equal-cost multi-path routing. Layer2 mode is easier for most users: it only requires the network to allow ARP/NDP packets. VIP mode has requirements similar to Layer2 but does not depend on ARP/NDP packets. However, VIP mode is still in the testing stage with many uncertainties; it also requires the Kubernetes cluster node to have only one NIC, and in my test, with only one IP address on that NIC, routing was lost after enabling VIP mode.

After introducing the three modes: in our case, Layer2 alone achieves the goal, so the following steps use Layer2 mode. For more detail on the differences between the modes, see the official reference: https://openelb.io/docs/getting-started/installation/install-openelb-on-kubernetes/

Step 1: Enable strictARP for kube-proxy

In Layer2 mode, kube-proxy needs strictARP enabled so that all NICs in the Kubernetes cluster stop responding to ARP requests from other NICs, and OpenELB handles those ARP requests instead.

1. Log in to the Kubernetes cluster and run the following commands to edit kube-proxy ConfigMap:

kubectl edit configmap kube-proxy -n kube-system

2. In the kube-proxy ConfigMap YAML configuration, set config.conf.ipvs.strictARP to true:

ipvs:
  strictARP: true

3. Run the following command to restart kube-proxy:

kubectl rollout restart daemonset kube-proxy -n kube-system
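
The three steps above can also be scripted non-interactively instead of editing the ConfigMap by hand. A minimal sketch (it assumes the ConfigMap currently contains "strictARP: false"):

kubectl get configmap kube-proxy -n kube-system -o yaml \
  | sed -e 's/strictARP: false/strictARP: true/' \
  | kubectl apply -f - -n kube-system

# Restart kube-proxy so the new setting takes effect.
kubectl rollout restart daemonset kube-proxy -n kube-system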

Step 2 (optional): Specify the network card for OpenELB

If the node where OpenELB is installed has multiple NICs, you need to specify the NIC used for OpenELB in layer 2 mode. If the node only has one NIC, you can skip this step.

In this example, the master1 node with OpenELB installed has two NICs (eth0 and eth1), and we specify eth0 for OpenELB.

Run the following command to annotate master1 and specify the NIC:

kubectl annotate nodes master1 layer2.openelb.kubesphere.io/v1alpha1=""

Step 3: Create a public network EIP object (Eip)

The EIP object here serves as the IP address pool of OpenELB. If you have multiple IPs, fill in the starting IP and the ending IP. If you only have one IP, just fill in one. In this example, we only have one public IP.

1. Run the following command to create a YAML file for the EIP object:

vi layer2-eip.yaml

2. Add the following information to the YAML file:

apiVersion: network.kubesphere.io/v1alpha2
kind: Eip
metadata:
  name: layer2-eip
spec:
  address: 162.55.6.138
  interface: eth0
  protocol: layer2

Please note!

  • spec.address: the IP address specified here must be on the same network segment as the Kubernetes cluster nodes.
  • If you add more servers to the cluster, make sure every node has an intranet connection available.

Then run the following command to create an elastic public IP object:

kubectl apply -f layer2-eip.yaml
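
For reference, if you do have a contiguous block of public IPs, spec.address also accepts a start and end IP separated by a hyphen. A hypothetical sketch (the name and addresses below are illustrative placeholders, not from this setup):

```yaml
apiVersion: network.kubesphere.io/v1alpha2
kind: Eip
metadata:
  name: layer2-eip-pool   # illustrative name
spec:
  # Start and end IP of the pool; both must be on the
  # cluster nodes' network segment.
  address: 192.168.0.91-192.168.0.100
  interface: eth0
  protocol: layer2
```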

After completing the above steps, the OpenELB installation is halfway done; the rest is done in KubeSphere.

Enable the cluster gateway in KubeSphere and specify OpenELB as the load balancer provider

Log in to the KubeSphere console, go to [Cluster] -> [Cluster Settings] -> [Gateway Settings], enable the cluster gateway, select OpenELB as the load balancer provider, and add the following three annotations:

lb.kubesphere.io/v1alpha1: openelb
protocol.openelb.kubesphere.io/v1alpha1: layer2
eip.openelb.kubesphere.io/v1alpha2: layer2-eip
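
Beyond the cluster gateway, any individual Service can be exposed through OpenELB with the same three annotations. A minimal sketch (the Service name, selector label, and ports are illustrative assumptions; it presumes a Deployment labeled app=nginx already exists):

```yaml
kind: Service
apiVersion: v1
metadata:
  name: nginx-lb            # illustrative name
  annotations:
    lb.kubesphere.io/v1alpha1: openelb
    protocol.openelb.kubesphere.io/v1alpha1: layer2
    eip.openelb.kubesphere.io/v1alpha2: layer2-eip
spec:
  selector:
    app: nginx              # assumes a Deployment labeled app=nginx
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: 80
```

OpenELB then assigns the Service an address from the layer2-eip pool, visible under EXTERNAL-IP in `kubectl get svc`.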

